00:00:00.001 Started by upstream project "autotest-per-patch" build number 127133 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.073 using credential 00000000-0000-0000-0000-000000000002 00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.121 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.171 Using shallow fetch with depth 1 00:00:00.171 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.171 > git --version # timeout=10 00:00:00.205 > git --version # 'git version 2.39.2' 00:00:00.205 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.235 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.235 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.564 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.574 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.586 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.586 > git config core.sparsecheckout # timeout=10 00:00:05.598 > git read-tree -mu HEAD # timeout=10 00:00:05.616 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.645 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.645 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:05.749 [Pipeline] Start of Pipeline 00:00:05.760 [Pipeline] library 00:00:05.762 Loading library shm_lib@master 00:00:05.762 Library shm_lib@master is cached. Copying from home. 00:00:05.773 [Pipeline] node 00:00:05.783 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:05.784 [Pipeline] { 00:00:05.793 [Pipeline] catchError 00:00:05.794 [Pipeline] { 00:00:05.803 [Pipeline] wrap 00:00:05.809 [Pipeline] { 00:00:05.815 [Pipeline] stage 00:00:05.816 [Pipeline] { (Prologue) 00:00:05.829 [Pipeline] echo 00:00:05.830 Node: VM-host-SM9 00:00:05.834 [Pipeline] cleanWs 00:00:05.841 [WS-CLEANUP] Deleting project workspace... 00:00:05.841 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.846 [WS-CLEANUP] done 00:00:06.008 [Pipeline] setCustomBuildProperty 00:00:06.093 [Pipeline] httpRequest 00:00:06.111 [Pipeline] echo 00:00:06.112 Sorcerer 10.211.164.101 is alive 00:00:06.120 [Pipeline] httpRequest 00:00:06.124 HttpMethod: GET 00:00:06.125 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.125 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.135 Response Code: HTTP/1.1 200 OK 00:00:06.136 Success: Status code 200 is in the accepted range: 200,404 00:00:06.136 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:14.894 [Pipeline] sh 00:00:15.171 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:15.185 [Pipeline] httpRequest 00:00:15.213 [Pipeline] echo 00:00:15.214 Sorcerer 10.211.164.101 is alive 00:00:15.220 [Pipeline] httpRequest 00:00:15.223 HttpMethod: GET 00:00:15.223 URL: http://10.211.164.101/packages/spdk_6a0934c18084780f6143ec4b292cc3b4c86ae60b.tar.gz 00:00:15.224 Sending request to url: http://10.211.164.101/packages/spdk_6a0934c18084780f6143ec4b292cc3b4c86ae60b.tar.gz 00:00:15.238 Response Code: HTTP/1.1 200 OK 00:00:15.238 Success: Status code 200 is in the accepted range: 200,404 00:00:15.239 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_6a0934c18084780f6143ec4b292cc3b4c86ae60b.tar.gz 00:03:10.826 [Pipeline] sh 00:03:11.104 + tar --no-same-owner -xf spdk_6a0934c18084780f6143ec4b292cc3b4c86ae60b.tar.gz 00:03:14.428 [Pipeline] sh 00:03:14.705 + git -C spdk log --oneline -n5 00:03:14.705 6a0934c18 lib/event: Modify spdk_reactor_set_interrupt_mode() to be called from scheduling reactor 00:03:14.705 d005e023b raid: fix empty slot not updated in sb after resize 00:03:14.705 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:03:14.705 8ee2672c4 test/bdev: Add test for resized RAID with superblock 00:03:14.705 19f5787c8 raid: skip configured base bdevs in sb examine 00:03:14.724 [Pipeline] writeFile 00:03:14.740 [Pipeline] sh 00:03:15.022 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:15.034 [Pipeline] sh 00:03:15.314 + cat autorun-spdk.conf 00:03:15.314 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.314 SPDK_TEST_NVMF=1 00:03:15.314 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:15.314 SPDK_TEST_USDT=1 00:03:15.314 SPDK_TEST_NVMF_MDNS=1 00:03:15.314 SPDK_RUN_UBSAN=1 00:03:15.314 NET_TYPE=virt 00:03:15.314 SPDK_JSONRPC_GO_CLIENT=1 00:03:15.314 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.321 RUN_NIGHTLY=0 00:03:15.323 [Pipeline] } 00:03:15.340 [Pipeline] // stage 00:03:15.356 [Pipeline] stage 00:03:15.358 [Pipeline] { (Run VM) 00:03:15.372 [Pipeline] sh 00:03:15.652 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:15.652 + echo 'Start stage prepare_nvme.sh' 00:03:15.652 Start stage prepare_nvme.sh 00:03:15.652 + [[ -n 0 ]] 00:03:15.652 + disk_prefix=ex0 00:03:15.652 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:03:15.652 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:03:15.652 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:03:15.652 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.652 ++ SPDK_TEST_NVMF=1 00:03:15.652 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:15.652 ++ SPDK_TEST_USDT=1 00:03:15.652 ++ SPDK_TEST_NVMF_MDNS=1 00:03:15.652 ++ SPDK_RUN_UBSAN=1 00:03:15.652 ++ NET_TYPE=virt 00:03:15.652 ++ SPDK_JSONRPC_GO_CLIENT=1 
00:03:15.652 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.652 ++ RUN_NIGHTLY=0 00:03:15.652 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:15.652 + nvme_files=() 00:03:15.652 + declare -A nvme_files 00:03:15.652 + backend_dir=/var/lib/libvirt/images/backends 00:03:15.652 + nvme_files['nvme.img']=5G 00:03:15.652 + nvme_files['nvme-cmb.img']=5G 00:03:15.652 + nvme_files['nvme-multi0.img']=4G 00:03:15.652 + nvme_files['nvme-multi1.img']=4G 00:03:15.652 + nvme_files['nvme-multi2.img']=4G 00:03:15.652 + nvme_files['nvme-openstack.img']=8G 00:03:15.652 + nvme_files['nvme-zns.img']=5G 00:03:15.652 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:15.652 + (( SPDK_TEST_FTL == 1 )) 00:03:15.652 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:15.652 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:15.652 + for nvme in "${!nvme_files[@]}" 00:03:15.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:03:15.652 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:15.652 + for nvme in "${!nvme_files[@]}" 00:03:15.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:03:15.652 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:15.652 + for nvme in "${!nvme_files[@]}" 00:03:15.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:03:15.652 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:15.652 + for nvme in "${!nvme_files[@]}" 00:03:15.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:03:15.652 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:15.652 + for nvme in "${!nvme_files[@]}" 00:03:15.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:03:15.652 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:15.652 + for nvme in "${!nvme_files[@]}" 00:03:15.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:03:15.652 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:15.652 + for nvme in "${!nvme_files[@]}" 00:03:15.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:03:15.911 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:15.911 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:03:15.911 + echo 'End stage prepare_nvme.sh' 00:03:15.911 End stage prepare_nvme.sh 00:03:15.922 [Pipeline] sh 00:03:16.202 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:16.202 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:03:16.202 
00:03:16.202 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:03:16.202 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:03:16.202 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:16.202 HELP=0 00:03:16.202 DRY_RUN=0 00:03:16.202 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:03:16.202 NVME_DISKS_TYPE=nvme,nvme, 00:03:16.202 NVME_AUTO_CREATE=0 00:03:16.202 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:03:16.202 NVME_CMB=,, 00:03:16.202 NVME_PMR=,, 00:03:16.202 NVME_ZNS=,, 00:03:16.202 NVME_MS=,, 00:03:16.202 NVME_FDP=,, 00:03:16.202 SPDK_VAGRANT_DISTRO=fedora38 00:03:16.202 SPDK_VAGRANT_VMCPU=10 00:03:16.202 SPDK_VAGRANT_VMRAM=12288 00:03:16.202 SPDK_VAGRANT_PROVIDER=libvirt 00:03:16.202 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:16.202 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:16.202 SPDK_OPENSTACK_NETWORK=0 00:03:16.202 VAGRANT_PACKAGE_BOX=0 00:03:16.202 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:16.202 FORCE_DISTRO=true 00:03:16.202 VAGRANT_BOX_VERSION= 00:03:16.202 EXTRA_VAGRANTFILES= 00:03:16.202 NIC_MODEL=e1000 00:03:16.202 00:03:16.202 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:03:16.202 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:19.487 Bringing machine 'default' up with 'libvirt' provider... 00:03:19.744 ==> default: Creating image (snapshot of base box volume). 00:03:20.002 ==> default: Creating domain with the following settings... 
00:03:20.002 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721884163_40fad87a6a8ede90b1d0 00:03:20.002 ==> default: -- Domain type: kvm 00:03:20.002 ==> default: -- Cpus: 10 00:03:20.002 ==> default: -- Feature: acpi 00:03:20.002 ==> default: -- Feature: apic 00:03:20.002 ==> default: -- Feature: pae 00:03:20.002 ==> default: -- Memory: 12288M 00:03:20.002 ==> default: -- Memory Backing: hugepages: 00:03:20.002 ==> default: -- Management MAC: 00:03:20.002 ==> default: -- Loader: 00:03:20.002 ==> default: -- Nvram: 00:03:20.002 ==> default: -- Base box: spdk/fedora38 00:03:20.002 ==> default: -- Storage pool: default 00:03:20.002 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721884163_40fad87a6a8ede90b1d0.img (20G) 00:03:20.002 ==> default: -- Volume Cache: default 00:03:20.002 ==> default: -- Kernel: 00:03:20.002 ==> default: -- Initrd: 00:03:20.002 ==> default: -- Graphics Type: vnc 00:03:20.002 ==> default: -- Graphics Port: -1 00:03:20.002 ==> default: -- Graphics IP: 127.0.0.1 00:03:20.002 ==> default: -- Graphics Password: Not defined 00:03:20.002 ==> default: -- Video Type: cirrus 00:03:20.002 ==> default: -- Video VRAM: 9216 00:03:20.002 ==> default: -- Sound Type: 00:03:20.002 ==> default: -- Keymap: en-us 00:03:20.002 ==> default: -- TPM Path: 00:03:20.002 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:20.002 ==> default: -- Command line args: 00:03:20.002 ==> default: -> value=-device, 00:03:20.002 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:20.002 ==> default: -> value=-drive, 00:03:20.002 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:03:20.002 ==> default: -> value=-device, 00:03:20.002 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:20.002 ==> default: -> value=-device, 00:03:20.002 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:20.002 ==> default: -> value=-drive, 00:03:20.002 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:20.002 ==> default: -> value=-device, 00:03:20.002 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:20.002 ==> default: -> value=-drive, 00:03:20.002 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:20.002 ==> default: -> value=-device, 00:03:20.002 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:20.002 ==> default: -> value=-drive, 00:03:20.002 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:20.002 ==> default: -> value=-device, 00:03:20.002 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:20.002 ==> default: Creating shared folders metadata... 00:03:20.002 ==> default: Starting domain. 00:03:21.906 ==> default: Waiting for domain to get an IP address... 00:03:40.032 ==> default: Waiting for SSH to become available... 00:03:40.032 ==> default: Configuring and enabling network interfaces... 
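The -drive/-device pairs above are how each raw backing file from prepare_nvme.sh is handed to the guest: "-drive ... if=none" defines the backing image, "-device nvme" creates an emulated controller, and "-device nvme-ns" attaches the image as a namespace on that controller. A minimal standalone sketch of the same wiring, runnable outside Vagrant, assuming qemu-system-x86_64 with KVM is available; the image path, IDs, machine type and memory size here are illustrative, not taken from this job:

    # Create a 5G raw backing file (same fmt/preallocation as in the log above).
    qemu-img create -f raw -o preallocation=falloc nvme.img 5G

    # Boot a guest with that file exposed as namespace 1 of an emulated NVMe
    # controller, mirroring the -drive/-device triplets above.
    qemu-system-x86_64 \
        -machine q35,accel=kvm -m 1024 \
        -drive format=raw,file=nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096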
00:03:43.336 default: SSH address: 192.168.121.146:22 00:03:43.336 default: SSH username: vagrant 00:03:43.336 default: SSH auth method: private key 00:03:45.236 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:53.348 ==> default: Mounting SSHFS shared folder... 00:03:53.914 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:03:53.914 ==> default: Checking Mount.. 00:03:55.289 ==> default: Folder Successfully Mounted! 00:03:55.289 ==> default: Running provisioner: file... 00:03:55.855 default: ~/.gitconfig => .gitconfig 00:03:56.114 00:03:56.114 SUCCESS! 00:03:56.114 00:03:56.114 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:03:56.114 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:56.114 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:03:56.114 00:03:56.124 [Pipeline] } 00:03:56.143 [Pipeline] // stage 00:03:56.153 [Pipeline] dir 00:03:56.154 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:03:56.156 [Pipeline] { 00:03:56.171 [Pipeline] catchError 00:03:56.173 [Pipeline] { 00:03:56.188 [Pipeline] sh 00:03:56.488 + vagrant ssh-config --host vagrant 00:03:56.488 + sed -ne /^Host/,$p 00:03:56.488 + tee ssh_conf 00:04:00.691 Host vagrant 00:04:00.691 HostName 192.168.121.146 00:04:00.691 User vagrant 00:04:00.691 Port 22 00:04:00.691 UserKnownHostsFile /dev/null 00:04:00.691 StrictHostKeyChecking no 00:04:00.691 PasswordAuthentication no 00:04:00.691 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:04:00.691 IdentitiesOnly yes 00:04:00.691 LogLevel FATAL 00:04:00.691 ForwardAgent yes 00:04:00.691 ForwardX11 yes 00:04:00.691 00:04:00.705 [Pipeline] withEnv 00:04:00.707 [Pipeline] { 00:04:00.721 [Pipeline] sh 00:04:00.999 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:00.999 source /etc/os-release 00:04:00.999 [[ -e /image.version ]] && img=$(< /image.version) 00:04:00.999 # Minimal, systemd-like check. 00:04:00.999 if [[ -e /.dockerenv ]]; then 00:04:00.999 # Clear garbage from the node's name: 00:04:00.999 # agt-er_autotest_547-896 -> autotest_547-896 00:04:00.999 # $HOSTNAME is the actual container id 00:04:00.999 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:00.999 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:00.999 # We can assume this is a mount from a host where container is running, 00:04:00.999 # so fetch its hostname to easily identify the target swarm worker. 
00:04:00.999 container="$(< /etc/hostname) ($agent)" 00:04:00.999 else 00:04:00.999 # Fallback 00:04:00.999 container=$agent 00:04:00.999 fi 00:04:00.999 fi 00:04:00.999 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:00.999 00:04:01.009 [Pipeline] } 00:04:01.029 [Pipeline] // withEnv 00:04:01.036 [Pipeline] setCustomBuildProperty 00:04:01.050 [Pipeline] stage 00:04:01.053 [Pipeline] { (Tests) 00:04:01.070 [Pipeline] sh 00:04:01.348 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:01.619 [Pipeline] sh 00:04:01.899 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:02.171 [Pipeline] timeout 00:04:02.172 Timeout set to expire in 40 min 00:04:02.173 [Pipeline] { 00:04:02.189 [Pipeline] sh 00:04:02.468 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:03.034 HEAD is now at 6a0934c18 lib/event: Modify spdk_reactor_set_interrupt_mode() to be called from scheduling reactor 00:04:03.045 [Pipeline] sh 00:04:03.325 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:03.593 [Pipeline] sh 00:04:03.869 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:03.885 [Pipeline] sh 00:04:04.163 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:04:04.163 ++ readlink -f spdk_repo 00:04:04.163 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:04.163 + [[ -n /home/vagrant/spdk_repo ]] 00:04:04.163 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:04.163 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:04.163 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:04.163 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:04.163 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:04.163 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:04:04.163 + cd /home/vagrant/spdk_repo 00:04:04.163 + source /etc/os-release 00:04:04.163 ++ NAME='Fedora Linux' 00:04:04.163 ++ VERSION='38 (Cloud Edition)' 00:04:04.163 ++ ID=fedora 00:04:04.163 ++ VERSION_ID=38 00:04:04.163 ++ VERSION_CODENAME= 00:04:04.163 ++ PLATFORM_ID=platform:f38 00:04:04.163 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:04.163 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:04.163 ++ LOGO=fedora-logo-icon 00:04:04.163 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:04.163 ++ HOME_URL=https://fedoraproject.org/ 00:04:04.163 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:04.163 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:04.163 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:04.163 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:04.163 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:04.163 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:04.163 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:04.163 ++ SUPPORT_END=2024-05-14 00:04:04.163 ++ VARIANT='Cloud Edition' 00:04:04.163 ++ VARIANT_ID=cloud 00:04:04.163 + uname -a 00:04:04.163 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:04.163 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:04.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.729 Hugepages 00:04:04.729 node hugesize free / total 00:04:04.729 node0 1048576kB 0 / 0 00:04:04.729 node0 2048kB 0 / 0 00:04:04.729 00:04:04.729 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.729 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:04.729 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:04.729 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:04.729 + rm -f /tmp/spdk-ld-path 00:04:04.729 + source autorun-spdk.conf 00:04:04.729 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:04.729 ++ SPDK_TEST_NVMF=1 00:04:04.730 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:04.730 ++ SPDK_TEST_USDT=1 00:04:04.730 ++ SPDK_TEST_NVMF_MDNS=1 00:04:04.730 ++ SPDK_RUN_UBSAN=1 00:04:04.730 ++ NET_TYPE=virt 00:04:04.730 ++ SPDK_JSONRPC_GO_CLIENT=1 00:04:04.730 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:04.730 ++ RUN_NIGHTLY=0 00:04:04.730 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:04.730 + [[ -n '' ]] 00:04:04.730 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:04.988 + for M in /var/spdk/build-*-manifest.txt 00:04:04.988 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:04.988 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:04.988 + for M in /var/spdk/build-*-manifest.txt 00:04:04.988 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:04.988 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:04.988 ++ uname 00:04:04.988 + [[ Linux == \L\i\n\u\x ]] 00:04:04.988 + sudo dmesg -T 00:04:04.988 + sudo dmesg --clear 00:04:04.988 + dmesg_pid=5147 00:04:04.988 + [[ Fedora Linux == FreeBSD ]] 00:04:04.988 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:04.988 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:04.988 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:04.988 + [[ -x /usr/src/fio-static/fio ]] 00:04:04.988 + sudo dmesg -Tw 00:04:04.988 + 
export FIO_BIN=/usr/src/fio-static/fio 00:04:04.988 + FIO_BIN=/usr/src/fio-static/fio 00:04:04.988 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:04.988 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:04.988 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:04.988 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:04.988 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:04.988 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:04.988 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:04.988 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:04.988 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:04.988 Test configuration: 00:04:04.988 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:04.988 SPDK_TEST_NVMF=1 00:04:04.988 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:04.988 SPDK_TEST_USDT=1 00:04:04.988 SPDK_TEST_NVMF_MDNS=1 00:04:04.988 SPDK_RUN_UBSAN=1 00:04:04.988 NET_TYPE=virt 00:04:04.988 SPDK_JSONRPC_GO_CLIENT=1 00:04:04.988 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:04.988 RUN_NIGHTLY=0 05:10:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.988 05:10:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:04.988 05:10:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.988 05:10:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.988 05:10:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.988 05:10:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.989 05:10:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.989 05:10:09 -- paths/export.sh@5 -- $ export PATH 00:04:04.989 05:10:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.989 05:10:09 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:04.989 05:10:09 -- common/autobuild_common.sh@447 -- $ date +%s 00:04:04.989 05:10:09 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721884209.XXXXXX 00:04:04.989 05:10:09 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721884209.JbVgqr 00:04:04.989 05:10:09 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:04:04.989 05:10:09 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:04:04.989 05:10:09 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:04.989 05:10:09 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:04.989 05:10:09 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:04.989 05:10:09 -- common/autobuild_common.sh@463 -- $ get_config_params 00:04:04.989 05:10:09 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:04:04.989 05:10:09 -- common/autotest_common.sh@10 -- $ set +x 00:04:04.989 05:10:09 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:04:04.989 05:10:09 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:04:04.989 05:10:09 -- pm/common@17 -- $ local monitor 00:04:04.989 05:10:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.989 05:10:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.989 05:10:09 -- pm/common@25 -- $ sleep 1 00:04:04.989 05:10:09 -- pm/common@21 -- $ date +%s 00:04:04.989 05:10:09 -- pm/common@21 -- $ date +%s 00:04:04.989 05:10:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721884209 00:04:04.989 05:10:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721884209 00:04:05.247 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721884209_collect-vmstat.pm.log 00:04:05.247 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721884209_collect-cpu-load.pm.log 00:04:06.181 05:10:10 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:04:06.181 05:10:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:06.181 05:10:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:06.181 05:10:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:06.181 05:10:10 -- spdk/autobuild.sh@16 -- $ date -u 00:04:06.181 Thu Jul 25 05:10:10 AM UTC 2024 00:04:06.181 05:10:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:06.181 v24.09-pre-319-g6a0934c18 00:04:06.181 05:10:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:06.181 05:10:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:06.181 05:10:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:06.181 05:10:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:06.181 05:10:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:06.181 05:10:10 -- common/autotest_common.sh@10 -- $ set +x 00:04:06.181 ************************************ 00:04:06.181 START TEST ubsan 00:04:06.181 ************************************ 00:04:06.181 using ubsan 00:04:06.181 05:10:10 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:04:06.181 00:04:06.181 
real 0m0.000s 00:04:06.181 user 0m0.000s 00:04:06.181 sys 0m0.000s 00:04:06.181 05:10:10 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:06.181 ************************************ 00:04:06.181 END TEST ubsan 00:04:06.181 ************************************ 00:04:06.181 05:10:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:06.181 05:10:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:06.181 05:10:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:06.181 05:10:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:06.181 05:10:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:06.181 05:10:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:06.181 05:10:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:06.181 05:10:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:06.181 05:10:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:06.181 05:10:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:04:06.181 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:06.181 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:06.746 Using 'verbs' RDMA provider 00:04:19.892 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:32.148 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:32.148 go version go1.21.1 linux/amd64 00:04:32.405 Creating mk/config.mk...done. 00:04:32.405 Creating mk/cc.flags.mk...done. 00:04:32.405 Type 'make' to build. 00:04:32.405 05:10:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:32.405 05:10:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:32.405 05:10:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:32.405 05:10:36 -- common/autotest_common.sh@10 -- $ set +x 00:04:32.405 ************************************ 00:04:32.405 START TEST make 00:04:32.405 ************************************ 00:04:32.405 05:10:36 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:32.663 make[1]: Nothing to be done for 'all'. 
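The ./configure invocation above is assembled, not hand-written: get_config_params turns the SPDK_* switches from autorun-spdk.conf into feature flags. A rough sketch of that mapping for the switches visible in this run (simplified and hypothetical; the real logic lives in SPDK's common autotest scripts):

    # Hypothetical sketch: derive configure flags from autorun-spdk.conf knobs.
    config_params='--enable-debug --enable-werror'
    [[ ${SPDK_RUN_UBSAN:-0} -eq 1 ]] && config_params+=' --enable-ubsan'
    [[ ${SPDK_TEST_USDT:-0} -eq 1 ]] && config_params+=' --with-usdt'
    [[ ${SPDK_TEST_NVMF_MDNS:-0} -eq 1 ]] && config_params+=' --with-avahi'    # mDNS support via avahi
    [[ ${SPDK_JSONRPC_GO_CLIENT:-0} -eq 1 ]] && config_params+=' --with-golang'
    ./configure $config_params --with-shared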
00:04:50.738 The Meson build system 00:04:50.738 Version: 1.3.1 00:04:50.738 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:50.738 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:50.738 Build type: native build 00:04:50.738 Program cat found: YES (/usr/bin/cat) 00:04:50.738 Project name: DPDK 00:04:50.738 Project version: 24.03.0 00:04:50.738 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:50.738 C linker for the host machine: cc ld.bfd 2.39-16 00:04:50.738 Host machine cpu family: x86_64 00:04:50.738 Host machine cpu: x86_64 00:04:50.738 Message: ## Building in Developer Mode ## 00:04:50.738 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:50.738 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:50.738 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:50.738 Program python3 found: YES (/usr/bin/python3) 00:04:50.738 Program cat found: YES (/usr/bin/cat) 00:04:50.738 Compiler for C supports arguments -march=native: YES 00:04:50.738 Checking for size of "void *" : 8 00:04:50.738 Checking for size of "void *" : 8 (cached) 00:04:50.738 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:04:50.738 Library m found: YES 00:04:50.738 Library numa found: YES 00:04:50.738 Has header "numaif.h" : YES 00:04:50.738 Library fdt found: NO 00:04:50.738 Library execinfo found: NO 00:04:50.738 Has header "execinfo.h" : YES 00:04:50.738 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:50.738 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:50.738 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:50.738 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:50.738 Run-time dependency openssl found: YES 3.0.9 00:04:50.738 Run-time dependency libpcap found: YES 1.10.4 00:04:50.738 Has header "pcap.h" with dependency libpcap: YES 00:04:50.738 Compiler for C supports arguments -Wcast-qual: YES 00:04:50.738 Compiler for C supports arguments -Wdeprecated: YES 00:04:50.738 Compiler for C supports arguments -Wformat: YES 00:04:50.738 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:50.738 Compiler for C supports arguments -Wformat-security: NO 00:04:50.738 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:50.738 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:50.738 Compiler for C supports arguments -Wnested-externs: YES 00:04:50.738 Compiler for C supports arguments -Wold-style-definition: YES 00:04:50.738 Compiler for C supports arguments -Wpointer-arith: YES 00:04:50.738 Compiler for C supports arguments -Wsign-compare: YES 00:04:50.738 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:50.738 Compiler for C supports arguments -Wundef: YES 00:04:50.738 Compiler for C supports arguments -Wwrite-strings: YES 00:04:50.738 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:50.738 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:50.738 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:50.738 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:50.738 Program objdump found: YES (/usr/bin/objdump) 00:04:50.738 Compiler for C supports arguments -mavx512f: YES 00:04:50.738 Checking if "AVX512 checking" compiles: YES 00:04:50.738 Fetching value of define "__SSE4_2__" : 1 00:04:50.738 Fetching value of define 
"__AES__" : 1 00:04:50.738 Fetching value of define "__AVX__" : 1 00:04:50.738 Fetching value of define "__AVX2__" : 1 00:04:50.738 Fetching value of define "__AVX512BW__" : (undefined) 00:04:50.738 Fetching value of define "__AVX512CD__" : (undefined) 00:04:50.738 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:50.738 Fetching value of define "__AVX512F__" : (undefined) 00:04:50.738 Fetching value of define "__AVX512VL__" : (undefined) 00:04:50.738 Fetching value of define "__PCLMUL__" : 1 00:04:50.738 Fetching value of define "__RDRND__" : 1 00:04:50.738 Fetching value of define "__RDSEED__" : 1 00:04:50.738 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:50.738 Fetching value of define "__znver1__" : (undefined) 00:04:50.738 Fetching value of define "__znver2__" : (undefined) 00:04:50.738 Fetching value of define "__znver3__" : (undefined) 00:04:50.738 Fetching value of define "__znver4__" : (undefined) 00:04:50.738 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:50.738 Message: lib/log: Defining dependency "log" 00:04:50.738 Message: lib/kvargs: Defining dependency "kvargs" 00:04:50.738 Message: lib/telemetry: Defining dependency "telemetry" 00:04:50.738 Checking for function "getentropy" : NO 00:04:50.738 Message: lib/eal: Defining dependency "eal" 00:04:50.738 Message: lib/ring: Defining dependency "ring" 00:04:50.738 Message: lib/rcu: Defining dependency "rcu" 00:04:50.738 Message: lib/mempool: Defining dependency "mempool" 00:04:50.738 Message: lib/mbuf: Defining dependency "mbuf" 00:04:50.738 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:50.738 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:50.738 Compiler for C supports arguments -mpclmul: YES 00:04:50.739 Compiler for C supports arguments -maes: YES 00:04:50.739 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:50.739 Compiler for C supports arguments -mavx512bw: YES 00:04:50.739 Compiler for C supports arguments -mavx512dq: YES 00:04:50.739 Compiler for C supports arguments -mavx512vl: YES 00:04:50.739 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:50.739 Compiler for C supports arguments -mavx2: YES 00:04:50.739 Compiler for C supports arguments -mavx: YES 00:04:50.739 Message: lib/net: Defining dependency "net" 00:04:50.739 Message: lib/meter: Defining dependency "meter" 00:04:50.739 Message: lib/ethdev: Defining dependency "ethdev" 00:04:50.739 Message: lib/pci: Defining dependency "pci" 00:04:50.739 Message: lib/cmdline: Defining dependency "cmdline" 00:04:50.739 Message: lib/hash: Defining dependency "hash" 00:04:50.739 Message: lib/timer: Defining dependency "timer" 00:04:50.739 Message: lib/compressdev: Defining dependency "compressdev" 00:04:50.739 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:50.739 Message: lib/dmadev: Defining dependency "dmadev" 00:04:50.739 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:50.739 Message: lib/power: Defining dependency "power" 00:04:50.739 Message: lib/reorder: Defining dependency "reorder" 00:04:50.739 Message: lib/security: Defining dependency "security" 00:04:50.739 Has header "linux/userfaultfd.h" : YES 00:04:50.739 Has header "linux/vduse.h" : YES 00:04:50.739 Message: lib/vhost: Defining dependency "vhost" 00:04:50.739 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:50.739 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:50.739 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:50.739 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:50.739 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:50.739 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:50.739 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:50.739 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:50.739 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:50.739 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:50.739 Program doxygen found: YES (/usr/bin/doxygen) 00:04:50.739 Configuring doxy-api-html.conf using configuration 00:04:50.739 Configuring doxy-api-man.conf using configuration 00:04:50.739 Program mandb found: YES (/usr/bin/mandb) 00:04:50.739 Program sphinx-build found: NO 00:04:50.739 Configuring rte_build_config.h using configuration 00:04:50.739 Message: 00:04:50.739 ================= 00:04:50.739 Applications Enabled 00:04:50.739 ================= 00:04:50.739 00:04:50.739 apps: 00:04:50.739 00:04:50.739 00:04:50.739 Message: 00:04:50.739 ================= 00:04:50.739 Libraries Enabled 00:04:50.739 ================= 00:04:50.739 00:04:50.739 libs: 00:04:50.739 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:50.739 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:50.739 cryptodev, dmadev, power, reorder, security, vhost, 00:04:50.739 00:04:50.739 Message: 00:04:50.739 =============== 00:04:50.739 Drivers Enabled 00:04:50.739 =============== 00:04:50.739 00:04:50.739 common: 00:04:50.739 00:04:50.739 bus: 00:04:50.739 pci, vdev, 00:04:50.739 mempool: 00:04:50.739 ring, 00:04:50.739 dma: 00:04:50.739 00:04:50.739 net: 00:04:50.739 00:04:50.739 crypto: 00:04:50.739 00:04:50.739 compress: 00:04:50.739 00:04:50.739 vdpa: 00:04:50.739 00:04:50.739 00:04:50.739 Message: 00:04:50.739 ================= 00:04:50.739 Content Skipped 00:04:50.739 ================= 00:04:50.739 00:04:50.739 apps: 00:04:50.739 dumpcap: explicitly disabled via build config 00:04:50.739 graph: explicitly disabled via build config 00:04:50.739 pdump: explicitly disabled via build config 00:04:50.739 proc-info: explicitly disabled via build config 00:04:50.739 test-acl: explicitly disabled via build config 00:04:50.739 test-bbdev: explicitly disabled via build config 00:04:50.739 test-cmdline: explicitly disabled via build config 00:04:50.739 test-compress-perf: explicitly disabled via build config 00:04:50.739 test-crypto-perf: explicitly disabled via build config 00:04:50.739 test-dma-perf: explicitly disabled via build config 00:04:50.739 test-eventdev: explicitly disabled via build config 00:04:50.739 test-fib: explicitly disabled via build config 00:04:50.739 test-flow-perf: explicitly disabled via build config 00:04:50.739 test-gpudev: explicitly disabled via build config 00:04:50.739 test-mldev: explicitly disabled via build config 00:04:50.739 test-pipeline: explicitly disabled via build config 00:04:50.739 test-pmd: explicitly disabled via build config 00:04:50.739 test-regex: explicitly disabled via build config 00:04:50.739 test-sad: explicitly disabled via build config 00:04:50.739 test-security-perf: explicitly disabled via build config 00:04:50.739 00:04:50.739 libs: 00:04:50.739 argparse: explicitly disabled via build config 00:04:50.739 metrics: explicitly disabled via build config 00:04:50.739 acl: explicitly disabled via build config 00:04:50.739 bbdev: explicitly disabled via build config 00:04:50.739 
bitratestats: explicitly disabled via build config 00:04:50.739 bpf: explicitly disabled via build config 00:04:50.739 cfgfile: explicitly disabled via build config 00:04:50.739 distributor: explicitly disabled via build config 00:04:50.739 efd: explicitly disabled via build config 00:04:50.739 eventdev: explicitly disabled via build config 00:04:50.739 dispatcher: explicitly disabled via build config 00:04:50.739 gpudev: explicitly disabled via build config 00:04:50.739 gro: explicitly disabled via build config 00:04:50.739 gso: explicitly disabled via build config 00:04:50.739 ip_frag: explicitly disabled via build config 00:04:50.739 jobstats: explicitly disabled via build config 00:04:50.739 latencystats: explicitly disabled via build config 00:04:50.739 lpm: explicitly disabled via build config 00:04:50.739 member: explicitly disabled via build config 00:04:50.739 pcapng: explicitly disabled via build config 00:04:50.739 rawdev: explicitly disabled via build config 00:04:50.739 regexdev: explicitly disabled via build config 00:04:50.739 mldev: explicitly disabled via build config 00:04:50.739 rib: explicitly disabled via build config 00:04:50.739 sched: explicitly disabled via build config 00:04:50.739 stack: explicitly disabled via build config 00:04:50.739 ipsec: explicitly disabled via build config 00:04:50.739 pdcp: explicitly disabled via build config 00:04:50.739 fib: explicitly disabled via build config 00:04:50.739 port: explicitly disabled via build config 00:04:50.739 pdump: explicitly disabled via build config 00:04:50.739 table: explicitly disabled via build config 00:04:50.739 pipeline: explicitly disabled via build config 00:04:50.739 graph: explicitly disabled via build config 00:04:50.739 node: explicitly disabled via build config 00:04:50.739 00:04:50.739 drivers: 00:04:50.739 common/cpt: not in enabled drivers build config 00:04:50.739 common/dpaax: not in enabled drivers build config 00:04:50.739 common/iavf: not in enabled drivers build config 00:04:50.739 common/idpf: not in enabled drivers build config 00:04:50.739 common/ionic: not in enabled drivers build config 00:04:50.739 common/mvep: not in enabled drivers build config 00:04:50.739 common/octeontx: not in enabled drivers build config 00:04:50.739 bus/auxiliary: not in enabled drivers build config 00:04:50.739 bus/cdx: not in enabled drivers build config 00:04:50.739 bus/dpaa: not in enabled drivers build config 00:04:50.739 bus/fslmc: not in enabled drivers build config 00:04:50.739 bus/ifpga: not in enabled drivers build config 00:04:50.739 bus/platform: not in enabled drivers build config 00:04:50.739 bus/uacce: not in enabled drivers build config 00:04:50.739 bus/vmbus: not in enabled drivers build config 00:04:50.739 common/cnxk: not in enabled drivers build config 00:04:50.739 common/mlx5: not in enabled drivers build config 00:04:50.739 common/nfp: not in enabled drivers build config 00:04:50.739 common/nitrox: not in enabled drivers build config 00:04:50.739 common/qat: not in enabled drivers build config 00:04:50.739 common/sfc_efx: not in enabled drivers build config 00:04:50.739 mempool/bucket: not in enabled drivers build config 00:04:50.739 mempool/cnxk: not in enabled drivers build config 00:04:50.739 mempool/dpaa: not in enabled drivers build config 00:04:50.739 mempool/dpaa2: not in enabled drivers build config 00:04:50.739 mempool/octeontx: not in enabled drivers build config 00:04:50.739 mempool/stack: not in enabled drivers build config 00:04:50.739 dma/cnxk: not in enabled drivers build 
config 00:04:50.739 dma/dpaa: not in enabled drivers build config 00:04:50.739 dma/dpaa2: not in enabled drivers build config 00:04:50.739 dma/hisilicon: not in enabled drivers build config 00:04:50.739 dma/idxd: not in enabled drivers build config 00:04:50.739 dma/ioat: not in enabled drivers build config 00:04:50.739 dma/skeleton: not in enabled drivers build config 00:04:50.739 net/af_packet: not in enabled drivers build config 00:04:50.739 net/af_xdp: not in enabled drivers build config 00:04:50.739 net/ark: not in enabled drivers build config 00:04:50.739 net/atlantic: not in enabled drivers build config 00:04:50.739 net/avp: not in enabled drivers build config 00:04:50.739 net/axgbe: not in enabled drivers build config 00:04:50.739 net/bnx2x: not in enabled drivers build config 00:04:50.739 net/bnxt: not in enabled drivers build config 00:04:50.739 net/bonding: not in enabled drivers build config 00:04:50.739 net/cnxk: not in enabled drivers build config 00:04:50.739 net/cpfl: not in enabled drivers build config 00:04:50.739 net/cxgbe: not in enabled drivers build config 00:04:50.739 net/dpaa: not in enabled drivers build config 00:04:50.739 net/dpaa2: not in enabled drivers build config 00:04:50.739 net/e1000: not in enabled drivers build config 00:04:50.739 net/ena: not in enabled drivers build config 00:04:50.739 net/enetc: not in enabled drivers build config 00:04:50.739 net/enetfec: not in enabled drivers build config 00:04:50.739 net/enic: not in enabled drivers build config 00:04:50.739 net/failsafe: not in enabled drivers build config 00:04:50.740 net/fm10k: not in enabled drivers build config 00:04:50.740 net/gve: not in enabled drivers build config 00:04:50.740 net/hinic: not in enabled drivers build config 00:04:50.740 net/hns3: not in enabled drivers build config 00:04:50.740 net/i40e: not in enabled drivers build config 00:04:50.740 net/iavf: not in enabled drivers build config 00:04:50.740 net/ice: not in enabled drivers build config 00:04:50.740 net/idpf: not in enabled drivers build config 00:04:50.740 net/igc: not in enabled drivers build config 00:04:50.740 net/ionic: not in enabled drivers build config 00:04:50.740 net/ipn3ke: not in enabled drivers build config 00:04:50.740 net/ixgbe: not in enabled drivers build config 00:04:50.740 net/mana: not in enabled drivers build config 00:04:50.740 net/memif: not in enabled drivers build config 00:04:50.740 net/mlx4: not in enabled drivers build config 00:04:50.740 net/mlx5: not in enabled drivers build config 00:04:50.740 net/mvneta: not in enabled drivers build config 00:04:50.740 net/mvpp2: not in enabled drivers build config 00:04:50.740 net/netvsc: not in enabled drivers build config 00:04:50.740 net/nfb: not in enabled drivers build config 00:04:50.740 net/nfp: not in enabled drivers build config 00:04:50.740 net/ngbe: not in enabled drivers build config 00:04:50.740 net/null: not in enabled drivers build config 00:04:50.740 net/octeontx: not in enabled drivers build config 00:04:50.740 net/octeon_ep: not in enabled drivers build config 00:04:50.740 net/pcap: not in enabled drivers build config 00:04:50.740 net/pfe: not in enabled drivers build config 00:04:50.740 net/qede: not in enabled drivers build config 00:04:50.740 net/ring: not in enabled drivers build config 00:04:50.740 net/sfc: not in enabled drivers build config 00:04:50.740 net/softnic: not in enabled drivers build config 00:04:50.740 net/tap: not in enabled drivers build config 00:04:50.740 net/thunderx: not in enabled drivers build config 00:04:50.740 
net/txgbe: not in enabled drivers build config 00:04:50.740 net/vdev_netvsc: not in enabled drivers build config 00:04:50.740 net/vhost: not in enabled drivers build config 00:04:50.740 net/virtio: not in enabled drivers build config 00:04:50.740 net/vmxnet3: not in enabled drivers build config 00:04:50.740 raw/*: missing internal dependency, "rawdev" 00:04:50.740 crypto/armv8: not in enabled drivers build config 00:04:50.740 crypto/bcmfs: not in enabled drivers build config 00:04:50.740 crypto/caam_jr: not in enabled drivers build config 00:04:50.740 crypto/ccp: not in enabled drivers build config 00:04:50.740 crypto/cnxk: not in enabled drivers build config 00:04:50.740 crypto/dpaa_sec: not in enabled drivers build config 00:04:50.740 crypto/dpaa2_sec: not in enabled drivers build config 00:04:50.740 crypto/ipsec_mb: not in enabled drivers build config 00:04:50.740 crypto/mlx5: not in enabled drivers build config 00:04:50.740 crypto/mvsam: not in enabled drivers build config 00:04:50.740 crypto/nitrox: not in enabled drivers build config 00:04:50.740 crypto/null: not in enabled drivers build config 00:04:50.740 crypto/octeontx: not in enabled drivers build config 00:04:50.740 crypto/openssl: not in enabled drivers build config 00:04:50.740 crypto/scheduler: not in enabled drivers build config 00:04:50.740 crypto/uadk: not in enabled drivers build config 00:04:50.740 crypto/virtio: not in enabled drivers build config 00:04:50.740 compress/isal: not in enabled drivers build config 00:04:50.740 compress/mlx5: not in enabled drivers build config 00:04:50.740 compress/nitrox: not in enabled drivers build config 00:04:50.740 compress/octeontx: not in enabled drivers build config 00:04:50.740 compress/zlib: not in enabled drivers build config 00:04:50.740 regex/*: missing internal dependency, "regexdev" 00:04:50.740 ml/*: missing internal dependency, "mldev" 00:04:50.740 vdpa/ifc: not in enabled drivers build config 00:04:50.740 vdpa/mlx5: not in enabled drivers build config 00:04:50.740 vdpa/nfp: not in enabled drivers build config 00:04:50.740 vdpa/sfc: not in enabled drivers build config 00:04:50.740 event/*: missing internal dependency, "eventdev" 00:04:50.740 baseband/*: missing internal dependency, "bbdev" 00:04:50.740 gpu/*: missing internal dependency, "gpudev" 00:04:50.740 00:04:50.740 00:04:50.740 Build targets in project: 85 00:04:50.740 00:04:50.740 DPDK 24.03.0 00:04:50.740 00:04:50.740 User defined options 00:04:50.740 buildtype : debug 00:04:50.740 default_library : shared 00:04:50.740 libdir : lib 00:04:50.740 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:50.740 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:50.740 c_link_args : 00:04:50.740 cpu_instruction_set: native 00:04:50.740 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:50.740 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:50.740 enable_docs : false 00:04:50.740 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:50.740 enable_kmods : false 00:04:50.740 max_lcores : 128 00:04:50.740 tests : false 00:04:50.740 00:04:50.740 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:50.740 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:50.740 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:50.740 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:50.740 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:50.740 [4/268] Linking static target lib/librte_kvargs.a 00:04:50.999 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:50.999 [6/268] Linking static target lib/librte_log.a 00:04:51.566 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:51.566 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.566 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:51.566 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:51.566 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:51.824 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:51.824 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:51.824 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:52.083 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:52.083 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:52.083 [17/268] Linking static target lib/librte_telemetry.a 00:04:52.083 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.083 [19/268] Linking target lib/librte_log.so.24.1 00:04:52.340 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:52.340 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:52.340 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:52.621 [23/268] Linking target lib/librte_kvargs.so.24.1 00:04:52.621 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:52.621 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:52.621 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:52.621 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:52.888 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:52.888 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:52.888 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:52.888 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:53.146 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.146 [33/268] Linking target lib/librte_telemetry.so.24.1 00:04:53.404 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:53.404 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:53.663 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:53.921 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:53.921 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:53.921 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:53.921 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:53.921 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:53.921 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:54.180 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:54.180 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:54.438 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:54.438 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:54.438 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:54.696 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:54.954 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:54.954 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:54.954 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:55.213 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:55.213 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:55.470 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:55.470 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:55.729 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:55.729 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:55.986 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:55.986 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:55.986 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:56.244 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:56.244 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:56.503 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:56.503 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:56.503 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:56.761 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:57.020 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:57.020 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:57.279 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:57.279 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:57.279 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:57.537 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:57.537 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:57.537 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:57.537 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:57.537 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:57.796 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:57.796 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:58.056 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:58.314 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:58.314 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:58.572 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:58.572 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:58.572 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:58.572 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:58.572 [86/268] Linking static target lib/librte_eal.a 00:04:58.572 [87/268] Linking static target lib/librte_ring.a 00:04:58.830 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:59.087 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:59.087 [90/268] Linking static target lib/librte_rcu.a 00:04:59.087 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:59.087 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:59.344 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.600 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:59.600 [95/268] Linking static target lib/librte_mempool.a 00:04:59.600 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:59.600 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.600 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:00.165 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:00.165 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:00.165 [101/268] Linking static target lib/librte_mbuf.a 00:05:00.422 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:00.422 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:00.422 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:00.681 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:00.681 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:00.681 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:00.681 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:00.938 [109/268] Linking static target lib/librte_meter.a 00:05:00.938 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:00.938 [111/268] Linking static target lib/librte_net.a 00:05:00.939 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.199 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:01.456 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.457 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.714 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:01.714 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.714 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:01.714 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:01.972 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:05:02.538 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:02.538 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:02.796 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:02.796 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:03.054 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:03.054 [126/268] Linking static target lib/librte_pci.a 00:05:03.054 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:03.311 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:03.311 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:03.311 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:03.311 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:03.569 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.569 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:03.569 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:03.569 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:03.569 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:03.826 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:03.826 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:03.826 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:03.826 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:03.826 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:03.826 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:03.826 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:03.826 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:04.084 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:04.084 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:04.084 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:04.084 [148/268] Linking static target lib/librte_cmdline.a 00:05:04.650 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:04.650 [150/268] Linking static target lib/librte_ethdev.a 00:05:04.650 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:04.650 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:04.650 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:04.908 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:04.908 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:04.908 [156/268] Linking static target lib/librte_timer.a 00:05:05.166 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:05.733 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:05.733 [159/268] Linking static target lib/librte_compressdev.a 00:05:05.733 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:05.733 [161/268] Linking static target 
lib/librte_hash.a 00:05:05.733 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.991 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:06.250 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:06.250 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:06.250 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:06.508 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.508 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:06.508 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:06.766 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:06.766 [171/268] Linking static target lib/librte_dmadev.a 00:05:06.766 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:06.766 [173/268] Linking static target lib/librte_cryptodev.a 00:05:07.024 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:07.024 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.282 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:07.540 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.540 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:07.798 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:07.798 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:07.798 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.798 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:08.056 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:08.366 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:08.624 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:08.624 [186/268] Linking static target lib/librte_reorder.a 00:05:08.882 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:08.882 [188/268] Linking static target lib/librte_power.a 00:05:08.882 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:09.140 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:09.140 [191/268] Linking static target lib/librte_security.a 00:05:09.406 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:09.406 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:09.668 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.235 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.235 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:10.235 [197/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.493 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:10.493 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.752 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:11.010 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:11.268 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:11.526 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:11.526 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:11.526 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:11.784 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:12.043 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:12.043 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:12.311 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:12.311 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:12.311 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:12.311 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:12.570 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:12.570 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:12.570 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:12.570 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:12.570 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:12.570 [218/268] Linking static target drivers/librte_bus_vdev.a 00:05:12.570 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:12.570 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:12.570 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:12.570 [222/268] Linking static target drivers/librte_bus_pci.a 00:05:12.828 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:12.828 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:12.828 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:12.828 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:12.828 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.394 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.394 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.652 [230/268] Linking target lib/librte_eal.so.24.1 00:05:13.652 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:13.911 [232/268] Linking target lib/librte_timer.so.24.1 00:05:13.911 [233/268] Linking target lib/librte_ring.so.24.1 00:05:13.911 [234/268] Linking target lib/librte_dmadev.so.24.1 00:05:13.911 [235/268] Linking target lib/librte_meter.so.24.1 00:05:13.911 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:13.911 [237/268] Linking target lib/librte_pci.so.24.1 00:05:13.911 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:13.911 [239/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:13.911 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:13.911 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:13.911 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:14.169 [243/268] Linking target lib/librte_rcu.so.24.1 00:05:14.169 [244/268] Linking target lib/librte_mempool.so.24.1 00:05:14.169 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:14.170 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:14.170 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:14.427 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:14.427 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:14.427 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:14.685 [251/268] Linking target lib/librte_reorder.so.24.1 00:05:14.685 [252/268] Linking target lib/librte_compressdev.so.24.1 00:05:14.685 [253/268] Linking target lib/librte_net.so.24.1 00:05:14.685 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:14.685 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:14.685 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:14.685 [257/268] Linking target lib/librte_hash.so.24.1 00:05:14.685 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.685 [259/268] Linking target lib/librte_cmdline.so.24.1 00:05:14.943 [260/268] Linking target lib/librte_security.so.24.1 00:05:14.943 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:14.943 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:14.943 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:15.202 [264/268] Linking target lib/librte_power.so.24.1 00:05:15.202 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:15.202 [266/268] Linking static target lib/librte_vhost.a 00:05:16.577 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.834 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:16.834 INFO: autodetecting backend as ninja 00:05:16.835 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:18.209 CC lib/ut_mock/mock.o 00:05:18.209 CC lib/log/log.o 00:05:18.209 CC lib/log/log_flags.o 00:05:18.209 CC lib/log/log_deprecated.o 00:05:18.209 CC lib/ut/ut.o 00:05:18.209 LIB libspdk_ut_mock.a 00:05:18.209 SO libspdk_ut_mock.so.6.0 00:05:18.209 LIB libspdk_log.a 00:05:18.209 SO libspdk_log.so.7.0 00:05:18.209 LIB libspdk_ut.a 00:05:18.209 SYMLINK libspdk_ut_mock.so 00:05:18.209 SO libspdk_ut.so.2.0 00:05:18.467 SYMLINK libspdk_log.so 00:05:18.467 SYMLINK libspdk_ut.so 00:05:18.467 CC lib/dma/dma.o 00:05:18.467 CC lib/ioat/ioat.o 00:05:18.467 CC lib/util/base64.o 00:05:18.467 CC lib/util/bit_array.o 00:05:18.467 CC lib/util/cpuset.o 00:05:18.467 CC lib/util/crc16.o 00:05:18.467 CC lib/util/crc32.o 00:05:18.467 CC lib/util/crc32c.o 00:05:18.467 CXX lib/trace_parser/trace.o 00:05:18.726 CC lib/vfio_user/host/vfio_user_pci.o 00:05:18.726 CC lib/vfio_user/host/vfio_user.o 00:05:18.726 CC lib/util/crc32_ieee.o 00:05:18.726 CC 
lib/util/crc64.o 00:05:18.726 CC lib/util/dif.o 00:05:18.726 CC lib/util/fd.o 00:05:18.984 LIB libspdk_dma.a 00:05:18.984 CC lib/util/fd_group.o 00:05:18.984 SO libspdk_dma.so.4.0 00:05:18.984 LIB libspdk_ioat.a 00:05:18.984 CC lib/util/file.o 00:05:18.984 CC lib/util/hexlify.o 00:05:18.984 SYMLINK libspdk_dma.so 00:05:18.984 SO libspdk_ioat.so.7.0 00:05:18.984 CC lib/util/iov.o 00:05:18.984 LIB libspdk_vfio_user.a 00:05:18.984 CC lib/util/math.o 00:05:18.984 CC lib/util/net.o 00:05:19.242 SYMLINK libspdk_ioat.so 00:05:19.242 CC lib/util/pipe.o 00:05:19.242 SO libspdk_vfio_user.so.5.0 00:05:19.242 CC lib/util/strerror_tls.o 00:05:19.242 SYMLINK libspdk_vfio_user.so 00:05:19.242 CC lib/util/string.o 00:05:19.242 CC lib/util/uuid.o 00:05:19.242 CC lib/util/xor.o 00:05:19.242 CC lib/util/zipf.o 00:05:19.808 LIB libspdk_util.a 00:05:19.808 SO libspdk_util.so.10.0 00:05:19.808 LIB libspdk_trace_parser.a 00:05:20.066 SO libspdk_trace_parser.so.5.0 00:05:20.066 SYMLINK libspdk_util.so 00:05:20.066 SYMLINK libspdk_trace_parser.so 00:05:20.324 CC lib/json/json_parse.o 00:05:20.324 CC lib/json/json_util.o 00:05:20.324 CC lib/json/json_write.o 00:05:20.324 CC lib/idxd/idxd_user.o 00:05:20.324 CC lib/idxd/idxd.o 00:05:20.324 CC lib/conf/conf.o 00:05:20.324 CC lib/env_dpdk/env.o 00:05:20.324 CC lib/vmd/vmd.o 00:05:20.324 CC lib/rdma_provider/common.o 00:05:20.324 CC lib/rdma_utils/rdma_utils.o 00:05:20.582 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:20.582 CC lib/idxd/idxd_kernel.o 00:05:20.582 LIB libspdk_conf.a 00:05:20.582 SO libspdk_conf.so.6.0 00:05:20.582 CC lib/vmd/led.o 00:05:20.582 CC lib/env_dpdk/memory.o 00:05:20.582 SYMLINK libspdk_conf.so 00:05:20.582 CC lib/env_dpdk/pci.o 00:05:20.582 LIB libspdk_json.a 00:05:20.582 LIB libspdk_rdma_provider.a 00:05:20.582 CC lib/env_dpdk/init.o 00:05:20.582 LIB libspdk_rdma_utils.a 00:05:20.841 SO libspdk_rdma_provider.so.6.0 00:05:20.841 SO libspdk_json.so.6.0 00:05:20.841 SO libspdk_rdma_utils.so.1.0 00:05:20.841 CC lib/env_dpdk/threads.o 00:05:20.841 SYMLINK libspdk_json.so 00:05:20.841 CC lib/env_dpdk/pci_ioat.o 00:05:20.841 SYMLINK libspdk_rdma_provider.so 00:05:20.841 CC lib/env_dpdk/pci_virtio.o 00:05:20.841 SYMLINK libspdk_rdma_utils.so 00:05:20.841 CC lib/env_dpdk/pci_vmd.o 00:05:20.841 LIB libspdk_vmd.a 00:05:20.841 CC lib/env_dpdk/pci_idxd.o 00:05:21.100 SO libspdk_vmd.so.6.0 00:05:21.100 CC lib/env_dpdk/pci_event.o 00:05:21.100 SYMLINK libspdk_vmd.so 00:05:21.100 CC lib/env_dpdk/sigbus_handler.o 00:05:21.100 CC lib/env_dpdk/pci_dpdk.o 00:05:21.100 CC lib/jsonrpc/jsonrpc_server.o 00:05:21.100 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:21.100 LIB libspdk_idxd.a 00:05:21.100 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:21.100 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:21.100 CC lib/jsonrpc/jsonrpc_client.o 00:05:21.100 SO libspdk_idxd.so.12.0 00:05:21.358 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:21.358 SYMLINK libspdk_idxd.so 00:05:21.616 LIB libspdk_jsonrpc.a 00:05:21.616 SO libspdk_jsonrpc.so.6.0 00:05:21.875 SYMLINK libspdk_jsonrpc.so 00:05:22.133 CC lib/rpc/rpc.o 00:05:22.392 LIB libspdk_rpc.a 00:05:22.392 LIB libspdk_env_dpdk.a 00:05:22.392 SO libspdk_rpc.so.6.0 00:05:22.392 SYMLINK libspdk_rpc.so 00:05:22.392 SO libspdk_env_dpdk.so.15.0 00:05:22.649 CC lib/keyring/keyring.o 00:05:22.649 CC lib/trace/trace_flags.o 00:05:22.649 CC lib/trace/trace_rpc.o 00:05:22.649 CC lib/trace/trace.o 00:05:22.649 CC lib/keyring/keyring_rpc.o 00:05:22.649 CC lib/notify/notify.o 00:05:22.649 CC lib/notify/notify_rpc.o 00:05:22.649 SYMLINK libspdk_env_dpdk.so 
00:05:22.908 LIB libspdk_notify.a 00:05:22.908 SO libspdk_notify.so.6.0 00:05:22.908 LIB libspdk_trace.a 00:05:22.908 SO libspdk_trace.so.10.0 00:05:23.166 SYMLINK libspdk_notify.so 00:05:23.166 LIB libspdk_keyring.a 00:05:23.166 SO libspdk_keyring.so.1.0 00:05:23.166 SYMLINK libspdk_trace.so 00:05:23.166 SYMLINK libspdk_keyring.so 00:05:23.425 CC lib/thread/thread.o 00:05:23.425 CC lib/thread/iobuf.o 00:05:23.425 CC lib/sock/sock.o 00:05:23.425 CC lib/sock/sock_rpc.o 00:05:23.993 LIB libspdk_sock.a 00:05:23.993 SO libspdk_sock.so.10.0 00:05:23.993 SYMLINK libspdk_sock.so 00:05:24.251 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:24.251 CC lib/nvme/nvme_ctrlr.o 00:05:24.251 CC lib/nvme/nvme_fabric.o 00:05:24.251 CC lib/nvme/nvme_ns_cmd.o 00:05:24.251 CC lib/nvme/nvme_ns.o 00:05:24.251 CC lib/nvme/nvme_pcie_common.o 00:05:24.251 CC lib/nvme/nvme_pcie.o 00:05:24.251 CC lib/nvme/nvme_qpair.o 00:05:24.251 CC lib/nvme/nvme.o 00:05:25.244 CC lib/nvme/nvme_quirks.o 00:05:25.244 LIB libspdk_thread.a 00:05:25.244 CC lib/nvme/nvme_transport.o 00:05:25.244 SO libspdk_thread.so.10.1 00:05:25.244 SYMLINK libspdk_thread.so 00:05:25.244 CC lib/nvme/nvme_discovery.o 00:05:25.244 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:25.503 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:25.503 CC lib/nvme/nvme_tcp.o 00:05:25.760 CC lib/nvme/nvme_opal.o 00:05:25.760 CC lib/nvme/nvme_io_msg.o 00:05:26.018 CC lib/accel/accel.o 00:05:26.018 CC lib/accel/accel_rpc.o 00:05:26.018 CC lib/accel/accel_sw.o 00:05:26.276 CC lib/nvme/nvme_poll_group.o 00:05:26.276 CC lib/nvme/nvme_zns.o 00:05:26.534 CC lib/nvme/nvme_stubs.o 00:05:26.534 CC lib/blob/blobstore.o 00:05:26.534 CC lib/virtio/virtio.o 00:05:26.534 CC lib/init/json_config.o 00:05:26.793 CC lib/init/subsystem.o 00:05:26.793 CC lib/init/subsystem_rpc.o 00:05:26.793 CC lib/virtio/virtio_vhost_user.o 00:05:27.051 CC lib/init/rpc.o 00:05:27.051 CC lib/virtio/virtio_vfio_user.o 00:05:27.051 CC lib/virtio/virtio_pci.o 00:05:27.051 LIB libspdk_accel.a 00:05:27.051 SO libspdk_accel.so.16.0 00:05:27.310 LIB libspdk_init.a 00:05:27.310 SO libspdk_init.so.5.0 00:05:27.310 SYMLINK libspdk_accel.so 00:05:27.310 CC lib/nvme/nvme_auth.o 00:05:27.310 CC lib/nvme/nvme_cuse.o 00:05:27.310 CC lib/nvme/nvme_rdma.o 00:05:27.310 SYMLINK libspdk_init.so 00:05:27.310 CC lib/blob/request.o 00:05:27.310 CC lib/blob/zeroes.o 00:05:27.568 CC lib/blob/blob_bs_dev.o 00:05:27.568 CC lib/bdev/bdev.o 00:05:27.568 LIB libspdk_virtio.a 00:05:27.568 SO libspdk_virtio.so.7.0 00:05:27.827 CC lib/bdev/bdev_rpc.o 00:05:27.827 SYMLINK libspdk_virtio.so 00:05:27.827 CC lib/bdev/bdev_zone.o 00:05:28.085 CC lib/bdev/part.o 00:05:28.085 CC lib/event/app.o 00:05:28.085 CC lib/event/reactor.o 00:05:28.343 CC lib/bdev/scsi_nvme.o 00:05:28.343 CC lib/event/log_rpc.o 00:05:28.650 CC lib/event/app_rpc.o 00:05:28.650 CC lib/event/scheduler_static.o 00:05:28.907 LIB libspdk_event.a 00:05:28.907 SO libspdk_event.so.14.0 00:05:29.164 SYMLINK libspdk_event.so 00:05:29.164 LIB libspdk_nvme.a 00:05:29.421 SO libspdk_nvme.so.13.1 00:05:29.986 SYMLINK libspdk_nvme.so 00:05:30.556 LIB libspdk_blob.a 00:05:30.556 SO libspdk_blob.so.11.0 00:05:30.814 SYMLINK libspdk_blob.so 00:05:31.116 CC lib/lvol/lvol.o 00:05:31.116 CC lib/blobfs/blobfs.o 00:05:31.116 CC lib/blobfs/tree.o 00:05:31.374 LIB libspdk_bdev.a 00:05:31.374 SO libspdk_bdev.so.16.0 00:05:31.635 SYMLINK libspdk_bdev.so 00:05:31.893 CC lib/scsi/dev.o 00:05:31.893 CC lib/ftl/ftl_core.o 00:05:31.893 CC lib/ftl/ftl_init.o 00:05:31.893 CC lib/scsi/lun.o 00:05:31.893 CC lib/ftl/ftl_layout.o 
00:05:31.893 CC lib/nvmf/ctrlr.o 00:05:31.893 CC lib/nbd/nbd.o 00:05:31.893 CC lib/ublk/ublk.o 00:05:31.893 LIB libspdk_blobfs.a 00:05:31.893 SO libspdk_blobfs.so.10.0 00:05:32.152 SYMLINK libspdk_blobfs.so 00:05:32.152 CC lib/nbd/nbd_rpc.o 00:05:32.152 LIB libspdk_lvol.a 00:05:32.152 SO libspdk_lvol.so.10.0 00:05:32.152 SYMLINK libspdk_lvol.so 00:05:32.152 CC lib/ublk/ublk_rpc.o 00:05:32.152 CC lib/ftl/ftl_debug.o 00:05:32.411 CC lib/scsi/port.o 00:05:32.411 CC lib/scsi/scsi.o 00:05:32.411 CC lib/scsi/scsi_bdev.o 00:05:32.411 LIB libspdk_nbd.a 00:05:32.411 CC lib/nvmf/ctrlr_discovery.o 00:05:32.411 SO libspdk_nbd.so.7.0 00:05:32.411 CC lib/nvmf/ctrlr_bdev.o 00:05:32.411 CC lib/nvmf/subsystem.o 00:05:32.411 CC lib/nvmf/nvmf.o 00:05:32.411 CC lib/ftl/ftl_io.o 00:05:32.669 SYMLINK libspdk_nbd.so 00:05:32.669 CC lib/nvmf/nvmf_rpc.o 00:05:32.669 CC lib/nvmf/transport.o 00:05:32.927 CC lib/ftl/ftl_sb.o 00:05:32.927 CC lib/scsi/scsi_pr.o 00:05:32.927 LIB libspdk_ublk.a 00:05:33.215 SO libspdk_ublk.so.3.0 00:05:33.215 SYMLINK libspdk_ublk.so 00:05:33.215 CC lib/scsi/scsi_rpc.o 00:05:33.215 CC lib/scsi/task.o 00:05:33.487 CC lib/ftl/ftl_l2p.o 00:05:33.487 CC lib/ftl/ftl_l2p_flat.o 00:05:33.487 CC lib/ftl/ftl_nv_cache.o 00:05:33.487 CC lib/nvmf/tcp.o 00:05:33.487 CC lib/nvmf/stubs.o 00:05:33.745 CC lib/nvmf/mdns_server.o 00:05:33.745 LIB libspdk_scsi.a 00:05:33.745 CC lib/ftl/ftl_band.o 00:05:33.745 SO libspdk_scsi.so.9.0 00:05:33.745 CC lib/ftl/ftl_band_ops.o 00:05:34.003 CC lib/nvmf/rdma.o 00:05:34.003 SYMLINK libspdk_scsi.so 00:05:34.003 CC lib/nvmf/auth.o 00:05:34.003 CC lib/ftl/ftl_writer.o 00:05:34.262 CC lib/ftl/ftl_rq.o 00:05:34.262 CC lib/iscsi/conn.o 00:05:34.262 CC lib/ftl/ftl_reloc.o 00:05:34.521 CC lib/vhost/vhost.o 00:05:34.521 CC lib/vhost/vhost_rpc.o 00:05:34.521 CC lib/vhost/vhost_scsi.o 00:05:34.521 CC lib/vhost/vhost_blk.o 00:05:34.780 CC lib/vhost/rte_vhost_user.o 00:05:35.039 CC lib/iscsi/init_grp.o 00:05:35.039 CC lib/ftl/ftl_l2p_cache.o 00:05:35.297 CC lib/iscsi/iscsi.o 00:05:35.297 CC lib/iscsi/md5.o 00:05:35.557 CC lib/iscsi/param.o 00:05:35.557 CC lib/ftl/ftl_p2l.o 00:05:35.557 CC lib/ftl/mngt/ftl_mngt.o 00:05:35.833 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:35.833 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:36.090 CC lib/iscsi/portal_grp.o 00:05:36.090 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:36.090 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:36.090 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:36.090 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:36.090 CC lib/iscsi/tgt_node.o 00:05:36.348 CC lib/iscsi/iscsi_subsystem.o 00:05:36.348 CC lib/iscsi/iscsi_rpc.o 00:05:36.348 LIB libspdk_vhost.a 00:05:36.607 SO libspdk_vhost.so.8.0 00:05:36.607 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:36.607 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:36.607 CC lib/iscsi/task.o 00:05:36.607 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:36.607 SYMLINK libspdk_vhost.so 00:05:36.865 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:36.865 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:36.865 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:37.123 CC lib/ftl/utils/ftl_conf.o 00:05:37.123 CC lib/ftl/utils/ftl_md.o 00:05:37.123 CC lib/ftl/utils/ftl_mempool.o 00:05:37.123 CC lib/ftl/utils/ftl_bitmap.o 00:05:37.123 CC lib/ftl/utils/ftl_property.o 00:05:37.123 LIB libspdk_nvmf.a 00:05:37.123 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:37.382 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:37.382 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:37.382 SO libspdk_nvmf.so.19.0 00:05:37.382 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:37.382 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:05:37.382 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:37.382 LIB libspdk_iscsi.a 00:05:37.640 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:37.640 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:37.640 SO libspdk_iscsi.so.8.0 00:05:37.640 SYMLINK libspdk_nvmf.so 00:05:37.640 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:37.640 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:37.899 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:37.899 CC lib/ftl/base/ftl_base_dev.o 00:05:37.899 CC lib/ftl/base/ftl_base_bdev.o 00:05:37.899 CC lib/ftl/ftl_trace.o 00:05:37.899 SYMLINK libspdk_iscsi.so 00:05:38.465 LIB libspdk_ftl.a 00:05:38.723 SO libspdk_ftl.so.9.0 00:05:38.981 SYMLINK libspdk_ftl.so 00:05:39.548 CC module/env_dpdk/env_dpdk_rpc.o 00:05:39.548 CC module/keyring/file/keyring.o 00:05:39.548 CC module/keyring/linux/keyring.o 00:05:39.548 CC module/sock/posix/posix.o 00:05:39.548 CC module/accel/error/accel_error.o 00:05:39.548 CC module/accel/iaa/accel_iaa.o 00:05:39.548 CC module/accel/ioat/accel_ioat.o 00:05:39.548 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:39.548 CC module/accel/dsa/accel_dsa.o 00:05:39.548 CC module/blob/bdev/blob_bdev.o 00:05:39.548 LIB libspdk_env_dpdk_rpc.a 00:05:39.805 SO libspdk_env_dpdk_rpc.so.6.0 00:05:39.805 CC module/keyring/linux/keyring_rpc.o 00:05:39.805 CC module/keyring/file/keyring_rpc.o 00:05:39.805 LIB libspdk_scheduler_dynamic.a 00:05:39.805 SYMLINK libspdk_env_dpdk_rpc.so 00:05:39.805 CC module/accel/dsa/accel_dsa_rpc.o 00:05:39.805 SO libspdk_scheduler_dynamic.so.4.0 00:05:39.805 CC module/accel/ioat/accel_ioat_rpc.o 00:05:39.805 CC module/accel/error/accel_error_rpc.o 00:05:39.805 LIB libspdk_keyring_linux.a 00:05:39.805 CC module/accel/iaa/accel_iaa_rpc.o 00:05:40.063 SO libspdk_keyring_linux.so.1.0 00:05:40.063 SYMLINK libspdk_scheduler_dynamic.so 00:05:40.063 LIB libspdk_keyring_file.a 00:05:40.063 SYMLINK libspdk_keyring_linux.so 00:05:40.063 SO libspdk_keyring_file.so.1.0 00:05:40.063 LIB libspdk_accel_ioat.a 00:05:40.063 LIB libspdk_blob_bdev.a 00:05:40.063 LIB libspdk_accel_dsa.a 00:05:40.063 SO libspdk_accel_ioat.so.6.0 00:05:40.063 SYMLINK libspdk_keyring_file.so 00:05:40.063 SO libspdk_blob_bdev.so.11.0 00:05:40.063 SO libspdk_accel_dsa.so.5.0 00:05:40.063 LIB libspdk_accel_error.a 00:05:40.063 SYMLINK libspdk_accel_ioat.so 00:05:40.063 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:40.063 LIB libspdk_accel_iaa.a 00:05:40.063 SYMLINK libspdk_blob_bdev.so 00:05:40.063 SO libspdk_accel_error.so.2.0 00:05:40.321 SYMLINK libspdk_accel_dsa.so 00:05:40.321 SO libspdk_accel_iaa.so.3.0 00:05:40.321 CC module/scheduler/gscheduler/gscheduler.o 00:05:40.321 SYMLINK libspdk_accel_error.so 00:05:40.321 SYMLINK libspdk_accel_iaa.so 00:05:40.579 LIB libspdk_scheduler_dpdk_governor.a 00:05:40.579 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:40.579 LIB libspdk_scheduler_gscheduler.a 00:05:40.579 CC module/bdev/delay/vbdev_delay.o 00:05:40.579 CC module/bdev/malloc/bdev_malloc.o 00:05:40.579 CC module/blobfs/bdev/blobfs_bdev.o 00:05:40.579 CC module/bdev/gpt/gpt.o 00:05:40.579 CC module/bdev/error/vbdev_error.o 00:05:40.579 CC module/bdev/lvol/vbdev_lvol.o 00:05:40.579 SO libspdk_scheduler_gscheduler.so.4.0 00:05:40.579 CC module/bdev/null/bdev_null.o 00:05:40.579 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:40.579 SYMLINK libspdk_scheduler_gscheduler.so 00:05:40.579 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:40.841 LIB libspdk_sock_posix.a 00:05:40.841 CC module/bdev/nvme/bdev_nvme.o 00:05:41.115 SO libspdk_sock_posix.so.6.0 00:05:41.115 CC module/bdev/gpt/vbdev_gpt.o 
00:05:41.115 LIB libspdk_blobfs_bdev.a 00:05:41.115 CC module/bdev/error/vbdev_error_rpc.o 00:05:41.115 CC module/bdev/null/bdev_null_rpc.o 00:05:41.115 SO libspdk_blobfs_bdev.so.6.0 00:05:41.115 CC module/bdev/passthru/vbdev_passthru.o 00:05:41.115 SYMLINK libspdk_sock_posix.so 00:05:41.115 SYMLINK libspdk_blobfs_bdev.so 00:05:41.115 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:41.373 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:41.373 LIB libspdk_bdev_error.a 00:05:41.373 LIB libspdk_bdev_null.a 00:05:41.373 SO libspdk_bdev_error.so.6.0 00:05:41.373 SO libspdk_bdev_null.so.6.0 00:05:41.373 LIB libspdk_bdev_delay.a 00:05:41.373 CC module/bdev/split/vbdev_split.o 00:05:41.373 CC module/bdev/raid/bdev_raid.o 00:05:41.374 LIB libspdk_bdev_gpt.a 00:05:41.374 SO libspdk_bdev_delay.so.6.0 00:05:41.633 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:41.633 SO libspdk_bdev_gpt.so.6.0 00:05:41.633 SYMLINK libspdk_bdev_error.so 00:05:41.633 SYMLINK libspdk_bdev_null.so 00:05:41.633 LIB libspdk_bdev_malloc.a 00:05:41.633 CC module/bdev/split/vbdev_split_rpc.o 00:05:41.633 CC module/bdev/raid/bdev_raid_rpc.o 00:05:41.633 SYMLINK libspdk_bdev_delay.so 00:05:41.633 CC module/bdev/raid/bdev_raid_sb.o 00:05:41.633 SO libspdk_bdev_malloc.so.6.0 00:05:41.633 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:41.633 SYMLINK libspdk_bdev_gpt.so 00:05:41.633 SYMLINK libspdk_bdev_malloc.so 00:05:41.633 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:41.891 LIB libspdk_bdev_split.a 00:05:41.891 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:41.891 LIB libspdk_bdev_passthru.a 00:05:41.891 SO libspdk_bdev_split.so.6.0 00:05:41.891 CC module/bdev/raid/raid0.o 00:05:41.891 SO libspdk_bdev_passthru.so.6.0 00:05:42.149 SYMLINK libspdk_bdev_split.so 00:05:42.149 SYMLINK libspdk_bdev_passthru.so 00:05:42.149 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:42.149 CC module/bdev/aio/bdev_aio.o 00:05:42.149 LIB libspdk_bdev_lvol.a 00:05:42.149 SO libspdk_bdev_lvol.so.6.0 00:05:42.408 CC module/bdev/ftl/bdev_ftl.o 00:05:42.408 CC module/bdev/iscsi/bdev_iscsi.o 00:05:42.408 SYMLINK libspdk_bdev_lvol.so 00:05:42.408 CC module/bdev/nvme/nvme_rpc.o 00:05:42.408 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:42.408 CC module/bdev/raid/raid1.o 00:05:42.408 LIB libspdk_bdev_zone_block.a 00:05:42.666 SO libspdk_bdev_zone_block.so.6.0 00:05:42.666 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:42.666 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:42.666 SYMLINK libspdk_bdev_zone_block.so 00:05:42.666 CC module/bdev/raid/concat.o 00:05:42.666 CC module/bdev/nvme/bdev_mdns_client.o 00:05:42.666 CC module/bdev/nvme/vbdev_opal.o 00:05:42.666 CC module/bdev/aio/bdev_aio_rpc.o 00:05:42.924 LIB libspdk_bdev_ftl.a 00:05:42.924 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:42.924 SO libspdk_bdev_ftl.so.6.0 00:05:42.924 LIB libspdk_bdev_iscsi.a 00:05:42.924 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:42.924 SYMLINK libspdk_bdev_ftl.so 00:05:42.924 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:42.924 LIB libspdk_bdev_aio.a 00:05:42.924 SO libspdk_bdev_iscsi.so.6.0 00:05:43.183 SO libspdk_bdev_aio.so.6.0 00:05:43.183 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:43.183 SYMLINK libspdk_bdev_iscsi.so 00:05:43.183 SYMLINK libspdk_bdev_aio.so 00:05:43.183 LIB libspdk_bdev_raid.a 00:05:43.447 SO libspdk_bdev_raid.so.6.0 00:05:43.447 SYMLINK libspdk_bdev_raid.so 00:05:43.447 LIB libspdk_bdev_virtio.a 00:05:43.447 SO libspdk_bdev_virtio.so.6.0 00:05:43.719 SYMLINK libspdk_bdev_virtio.so 00:05:44.286 LIB libspdk_bdev_nvme.a 00:05:44.544 SO 
libspdk_bdev_nvme.so.7.0 00:05:44.544 SYMLINK libspdk_bdev_nvme.so 00:05:45.109 CC module/event/subsystems/keyring/keyring.o 00:05:45.109 CC module/event/subsystems/iobuf/iobuf.o 00:05:45.109 CC module/event/subsystems/vmd/vmd.o 00:05:45.109 CC module/event/subsystems/scheduler/scheduler.o 00:05:45.109 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:45.109 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:45.109 CC module/event/subsystems/sock/sock.o 00:05:45.109 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:45.109 LIB libspdk_event_keyring.a 00:05:45.109 LIB libspdk_event_sock.a 00:05:45.109 LIB libspdk_event_vhost_blk.a 00:05:45.109 LIB libspdk_event_scheduler.a 00:05:45.109 SO libspdk_event_keyring.so.1.0 00:05:45.109 SO libspdk_event_sock.so.5.0 00:05:45.109 SO libspdk_event_vhost_blk.so.3.0 00:05:45.109 SO libspdk_event_scheduler.so.4.0 00:05:45.109 LIB libspdk_event_iobuf.a 00:05:45.109 LIB libspdk_event_vmd.a 00:05:45.368 SYMLINK libspdk_event_keyring.so 00:05:45.368 SO libspdk_event_vmd.so.6.0 00:05:45.368 SO libspdk_event_iobuf.so.3.0 00:05:45.368 SYMLINK libspdk_event_sock.so 00:05:45.368 SYMLINK libspdk_event_vhost_blk.so 00:05:45.368 SYMLINK libspdk_event_scheduler.so 00:05:45.368 SYMLINK libspdk_event_iobuf.so 00:05:45.368 SYMLINK libspdk_event_vmd.so 00:05:45.626 CC module/event/subsystems/accel/accel.o 00:05:45.626 LIB libspdk_event_accel.a 00:05:45.884 SO libspdk_event_accel.so.6.0 00:05:45.884 SYMLINK libspdk_event_accel.so 00:05:46.142 CC module/event/subsystems/bdev/bdev.o 00:05:46.399 LIB libspdk_event_bdev.a 00:05:46.399 SO libspdk_event_bdev.so.6.0 00:05:46.399 SYMLINK libspdk_event_bdev.so 00:05:46.657 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:46.657 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:46.657 CC module/event/subsystems/scsi/scsi.o 00:05:46.657 CC module/event/subsystems/ublk/ublk.o 00:05:46.657 CC module/event/subsystems/nbd/nbd.o 00:05:46.915 LIB libspdk_event_ublk.a 00:05:46.915 SO libspdk_event_ublk.so.3.0 00:05:46.915 LIB libspdk_event_nbd.a 00:05:46.915 LIB libspdk_event_scsi.a 00:05:46.915 SO libspdk_event_nbd.so.6.0 00:05:46.915 SYMLINK libspdk_event_ublk.so 00:05:46.915 SO libspdk_event_scsi.so.6.0 00:05:46.915 SYMLINK libspdk_event_nbd.so 00:05:47.173 SYMLINK libspdk_event_scsi.so 00:05:47.173 LIB libspdk_event_nvmf.a 00:05:47.173 SO libspdk_event_nvmf.so.6.0 00:05:47.173 SYMLINK libspdk_event_nvmf.so 00:05:47.431 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:47.431 CC module/event/subsystems/iscsi/iscsi.o 00:05:47.431 LIB libspdk_event_vhost_scsi.a 00:05:47.431 LIB libspdk_event_iscsi.a 00:05:47.689 SO libspdk_event_vhost_scsi.so.3.0 00:05:47.689 SO libspdk_event_iscsi.so.6.0 00:05:47.689 SYMLINK libspdk_event_vhost_scsi.so 00:05:47.689 SYMLINK libspdk_event_iscsi.so 00:05:47.689 SO libspdk.so.6.0 00:05:47.947 SYMLINK libspdk.so 00:05:47.947 CC app/spdk_lspci/spdk_lspci.o 00:05:47.947 CC app/trace_record/trace_record.o 00:05:47.947 CXX app/trace/trace.o 00:05:48.206 CC app/iscsi_tgt/iscsi_tgt.o 00:05:48.206 CC app/nvmf_tgt/nvmf_main.o 00:05:48.206 CC test/thread/poller_perf/poller_perf.o 00:05:48.206 CC examples/ioat/perf/perf.o 00:05:48.206 CC app/spdk_tgt/spdk_tgt.o 00:05:48.206 LINK spdk_lspci 00:05:48.206 CC examples/util/zipf/zipf.o 00:05:48.464 LINK iscsi_tgt 00:05:48.464 LINK spdk_tgt 00:05:48.464 LINK ioat_perf 00:05:48.464 LINK poller_perf 00:05:48.464 LINK zipf 00:05:48.464 LINK spdk_trace_record 00:05:48.464 LINK nvmf_tgt 00:05:48.722 CC app/spdk_nvme_perf/perf.o 00:05:48.722 LINK spdk_trace 00:05:48.722 CC 
examples/ioat/verify/verify.o 00:05:48.980 CC app/spdk_nvme_discover/discovery_aer.o 00:05:48.980 CC app/spdk_nvme_identify/identify.o 00:05:48.980 CC app/spdk_top/spdk_top.o 00:05:48.980 LINK verify 00:05:48.980 CC app/spdk_dd/spdk_dd.o 00:05:49.238 CC test/dma/test_dma/test_dma.o 00:05:49.238 CC app/fio/nvme/fio_plugin.o 00:05:49.238 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:49.238 LINK spdk_nvme_discover 00:05:49.496 LINK interrupt_tgt 00:05:49.496 LINK spdk_dd 00:05:49.754 CC app/vhost/vhost.o 00:05:49.754 LINK spdk_nvme_perf 00:05:49.754 LINK test_dma 00:05:50.321 LINK vhost 00:05:50.321 CC test/app/bdev_svc/bdev_svc.o 00:05:50.321 LINK spdk_nvme 00:05:50.321 CC test/app/histogram_perf/histogram_perf.o 00:05:50.321 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:50.321 TEST_HEADER include/spdk/accel.h 00:05:50.321 TEST_HEADER include/spdk/accel_module.h 00:05:50.321 TEST_HEADER include/spdk/assert.h 00:05:50.321 TEST_HEADER include/spdk/barrier.h 00:05:50.321 TEST_HEADER include/spdk/base64.h 00:05:50.321 TEST_HEADER include/spdk/bdev.h 00:05:50.321 TEST_HEADER include/spdk/bdev_module.h 00:05:50.321 TEST_HEADER include/spdk/bdev_zone.h 00:05:50.321 TEST_HEADER include/spdk/bit_array.h 00:05:50.321 TEST_HEADER include/spdk/bit_pool.h 00:05:50.321 TEST_HEADER include/spdk/blob_bdev.h 00:05:50.321 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:50.321 TEST_HEADER include/spdk/blobfs.h 00:05:50.579 TEST_HEADER include/spdk/blob.h 00:05:50.579 TEST_HEADER include/spdk/conf.h 00:05:50.579 TEST_HEADER include/spdk/config.h 00:05:50.579 TEST_HEADER include/spdk/cpuset.h 00:05:50.579 TEST_HEADER include/spdk/crc16.h 00:05:50.579 TEST_HEADER include/spdk/crc32.h 00:05:50.579 TEST_HEADER include/spdk/crc64.h 00:05:50.579 TEST_HEADER include/spdk/dif.h 00:05:50.579 TEST_HEADER include/spdk/dma.h 00:05:50.579 TEST_HEADER include/spdk/endian.h 00:05:50.579 TEST_HEADER include/spdk/env_dpdk.h 00:05:50.579 TEST_HEADER include/spdk/env.h 00:05:50.579 TEST_HEADER include/spdk/event.h 00:05:50.579 TEST_HEADER include/spdk/fd_group.h 00:05:50.579 TEST_HEADER include/spdk/fd.h 00:05:50.579 LINK histogram_perf 00:05:50.579 TEST_HEADER include/spdk/file.h 00:05:50.579 TEST_HEADER include/spdk/ftl.h 00:05:50.579 TEST_HEADER include/spdk/gpt_spec.h 00:05:50.579 TEST_HEADER include/spdk/hexlify.h 00:05:50.579 TEST_HEADER include/spdk/histogram_data.h 00:05:50.579 TEST_HEADER include/spdk/idxd.h 00:05:50.579 TEST_HEADER include/spdk/idxd_spec.h 00:05:50.579 LINK spdk_nvme_identify 00:05:50.579 TEST_HEADER include/spdk/init.h 00:05:50.579 TEST_HEADER include/spdk/ioat.h 00:05:50.579 TEST_HEADER include/spdk/ioat_spec.h 00:05:50.579 TEST_HEADER include/spdk/iscsi_spec.h 00:05:50.579 TEST_HEADER include/spdk/json.h 00:05:50.579 TEST_HEADER include/spdk/jsonrpc.h 00:05:50.579 TEST_HEADER include/spdk/keyring.h 00:05:50.579 TEST_HEADER include/spdk/keyring_module.h 00:05:50.579 LINK bdev_svc 00:05:50.579 TEST_HEADER include/spdk/likely.h 00:05:50.580 TEST_HEADER include/spdk/log.h 00:05:50.580 TEST_HEADER include/spdk/lvol.h 00:05:50.580 TEST_HEADER include/spdk/memory.h 00:05:50.580 TEST_HEADER include/spdk/mmio.h 00:05:50.580 TEST_HEADER include/spdk/nbd.h 00:05:50.580 TEST_HEADER include/spdk/net.h 00:05:50.580 CC app/fio/bdev/fio_plugin.o 00:05:50.580 LINK spdk_top 00:05:50.580 TEST_HEADER include/spdk/notify.h 00:05:50.580 TEST_HEADER include/spdk/nvme.h 00:05:50.580 TEST_HEADER include/spdk/nvme_intel.h 00:05:50.580 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:50.580 TEST_HEADER include/spdk/nvme_ocssd_spec.h 
00:05:50.580 TEST_HEADER include/spdk/nvme_spec.h 00:05:50.580 TEST_HEADER include/spdk/nvme_zns.h 00:05:50.580 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:50.580 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:50.580 TEST_HEADER include/spdk/nvmf.h 00:05:50.580 TEST_HEADER include/spdk/nvmf_spec.h 00:05:50.580 TEST_HEADER include/spdk/nvmf_transport.h 00:05:50.580 TEST_HEADER include/spdk/opal.h 00:05:50.838 TEST_HEADER include/spdk/opal_spec.h 00:05:50.838 TEST_HEADER include/spdk/pci_ids.h 00:05:50.838 TEST_HEADER include/spdk/pipe.h 00:05:50.838 TEST_HEADER include/spdk/queue.h 00:05:50.838 TEST_HEADER include/spdk/reduce.h 00:05:50.838 TEST_HEADER include/spdk/rpc.h 00:05:50.838 TEST_HEADER include/spdk/scheduler.h 00:05:50.838 TEST_HEADER include/spdk/scsi.h 00:05:50.838 TEST_HEADER include/spdk/scsi_spec.h 00:05:50.838 TEST_HEADER include/spdk/sock.h 00:05:50.838 TEST_HEADER include/spdk/stdinc.h 00:05:50.838 TEST_HEADER include/spdk/string.h 00:05:50.838 TEST_HEADER include/spdk/thread.h 00:05:50.838 TEST_HEADER include/spdk/trace.h 00:05:50.838 TEST_HEADER include/spdk/trace_parser.h 00:05:50.838 TEST_HEADER include/spdk/tree.h 00:05:50.838 TEST_HEADER include/spdk/ublk.h 00:05:50.838 TEST_HEADER include/spdk/util.h 00:05:50.838 TEST_HEADER include/spdk/uuid.h 00:05:50.838 TEST_HEADER include/spdk/version.h 00:05:50.838 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:50.838 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:50.838 TEST_HEADER include/spdk/vhost.h 00:05:50.838 TEST_HEADER include/spdk/vmd.h 00:05:50.838 TEST_HEADER include/spdk/xor.h 00:05:50.838 TEST_HEADER include/spdk/zipf.h 00:05:50.838 CXX test/cpp_headers/accel.o 00:05:50.838 CC test/env/mem_callbacks/mem_callbacks.o 00:05:50.838 CC test/event/event_perf/event_perf.o 00:05:50.838 LINK nvme_fuzz 00:05:50.838 CC test/app/jsoncat/jsoncat.o 00:05:51.097 CC test/app/stub/stub.o 00:05:51.097 CC test/env/vtophys/vtophys.o 00:05:51.097 LINK event_perf 00:05:51.097 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:51.097 CXX test/cpp_headers/accel_module.o 00:05:51.097 LINK jsoncat 00:05:51.361 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:51.361 LINK vtophys 00:05:51.361 LINK stub 00:05:51.361 LINK env_dpdk_post_init 00:05:51.361 LINK spdk_bdev 00:05:51.361 CC test/event/reactor/reactor.o 00:05:51.361 CXX test/cpp_headers/assert.o 00:05:51.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:51.878 LINK reactor 00:05:51.878 CXX test/cpp_headers/barrier.o 00:05:51.878 CC test/rpc_client/rpc_client_test.o 00:05:51.878 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:52.136 LINK mem_callbacks 00:05:52.136 CC test/nvme/aer/aer.o 00:05:52.136 CXX test/cpp_headers/base64.o 00:05:52.394 LINK rpc_client_test 00:05:52.394 CC test/event/reactor_perf/reactor_perf.o 00:05:52.394 CC test/blobfs/mkfs/mkfs.o 00:05:52.394 CC test/accel/dif/dif.o 00:05:52.394 CC test/env/memory/memory_ut.o 00:05:52.652 CXX test/cpp_headers/bdev.o 00:05:52.652 LINK aer 00:05:52.652 LINK reactor_perf 00:05:52.652 LINK vhost_fuzz 00:05:52.652 CC test/env/pci/pci_ut.o 00:05:52.919 LINK mkfs 00:05:52.919 CXX test/cpp_headers/bdev_module.o 00:05:52.919 CC test/event/app_repeat/app_repeat.o 00:05:52.919 CC test/nvme/reset/reset.o 00:05:52.919 LINK dif 00:05:53.177 CC test/event/scheduler/scheduler.o 00:05:53.177 CXX test/cpp_headers/bdev_zone.o 00:05:53.177 LINK app_repeat 00:05:53.177 LINK pci_ut 00:05:53.177 LINK reset 00:05:53.177 LINK scheduler 00:05:53.435 CXX test/cpp_headers/bit_array.o 00:05:53.435 CC test/nvme/sgl/sgl.o 00:05:53.435 CC 
test/lvol/esnap/esnap.o 00:05:53.693 CXX test/cpp_headers/bit_pool.o 00:05:53.693 CC examples/thread/thread/thread_ex.o 00:05:53.693 CC test/nvme/e2edp/nvme_dp.o 00:05:53.693 LINK sgl 00:05:53.951 CC examples/sock/hello_world/hello_sock.o 00:05:53.951 CC test/bdev/bdevio/bdevio.o 00:05:54.210 CXX test/cpp_headers/blob_bdev.o 00:05:54.210 LINK thread 00:05:54.210 LINK iscsi_fuzz 00:05:54.468 CXX test/cpp_headers/blobfs_bdev.o 00:05:54.468 CC test/nvme/overhead/overhead.o 00:05:54.468 LINK hello_sock 00:05:54.468 CXX test/cpp_headers/blobfs.o 00:05:54.727 LINK nvme_dp 00:05:54.727 LINK memory_ut 00:05:54.727 LINK bdevio 00:05:54.984 CC test/nvme/err_injection/err_injection.o 00:05:54.984 CC test/nvme/startup/startup.o 00:05:54.984 CXX test/cpp_headers/blob.o 00:05:54.984 LINK overhead 00:05:55.242 CC test/nvme/reserve/reserve.o 00:05:55.242 CC test/nvme/simple_copy/simple_copy.o 00:05:55.242 LINK err_injection 00:05:55.242 LINK startup 00:05:55.242 CXX test/cpp_headers/conf.o 00:05:55.500 CC test/nvme/connect_stress/connect_stress.o 00:05:55.500 LINK simple_copy 00:05:55.500 LINK reserve 00:05:55.500 CC examples/vmd/lsvmd/lsvmd.o 00:05:55.757 CXX test/cpp_headers/config.o 00:05:55.757 CXX test/cpp_headers/cpuset.o 00:05:55.757 CC examples/vmd/led/led.o 00:05:55.757 CC test/nvme/boot_partition/boot_partition.o 00:05:55.757 LINK lsvmd 00:05:55.757 CC test/nvme/compliance/nvme_compliance.o 00:05:55.757 LINK connect_stress 00:05:56.014 LINK led 00:05:56.014 CXX test/cpp_headers/crc16.o 00:05:56.272 CC test/nvme/fused_ordering/fused_ordering.o 00:05:56.272 LINK boot_partition 00:05:56.272 CC examples/idxd/perf/perf.o 00:05:56.272 CXX test/cpp_headers/crc32.o 00:05:56.546 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:56.546 LINK nvme_compliance 00:05:56.546 LINK fused_ordering 00:05:56.546 CC examples/accel/perf/accel_perf.o 00:05:56.546 CC test/nvme/fdp/fdp.o 00:05:56.804 CXX test/cpp_headers/crc64.o 00:05:56.804 LINK idxd_perf 00:05:56.804 CXX test/cpp_headers/dif.o 00:05:56.804 LINK doorbell_aers 00:05:56.804 CC examples/blob/hello_world/hello_blob.o 00:05:57.061 CXX test/cpp_headers/dma.o 00:05:57.061 CC examples/nvme/hello_world/hello_world.o 00:05:57.318 CC test/nvme/cuse/cuse.o 00:05:57.318 CC examples/nvme/reconnect/reconnect.o 00:05:57.318 LINK accel_perf 00:05:57.318 LINK hello_blob 00:05:57.318 CC examples/blob/cli/blobcli.o 00:05:57.318 LINK fdp 00:05:57.318 CXX test/cpp_headers/endian.o 00:05:57.576 LINK hello_world 00:05:57.576 CXX test/cpp_headers/env_dpdk.o 00:05:57.576 CXX test/cpp_headers/env.o 00:05:57.576 LINK reconnect 00:05:57.833 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:57.833 CC examples/bdev/hello_world/hello_bdev.o 00:05:58.091 CXX test/cpp_headers/event.o 00:05:58.091 LINK blobcli 00:05:58.091 CC examples/nvme/hotplug/hotplug.o 00:05:58.091 CC examples/nvme/arbitration/arbitration.o 00:05:58.091 CC examples/bdev/bdevperf/bdevperf.o 00:05:58.349 CXX test/cpp_headers/fd_group.o 00:05:58.349 LINK hello_bdev 00:05:58.607 LINK hotplug 00:05:58.607 CXX test/cpp_headers/fd.o 00:05:58.607 CXX test/cpp_headers/file.o 00:05:58.607 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:58.607 CC examples/nvme/abort/abort.o 00:05:58.865 LINK nvme_manage 00:05:58.865 LINK arbitration 00:05:58.865 CXX test/cpp_headers/ftl.o 00:05:58.865 CXX test/cpp_headers/gpt_spec.o 00:05:58.865 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:59.122 LINK cmb_copy 00:05:59.122 LINK cuse 00:05:59.122 CXX test/cpp_headers/hexlify.o 00:05:59.122 CXX test/cpp_headers/histogram_data.o 00:05:59.122 
LINK pmr_persistence 00:05:59.122 CXX test/cpp_headers/idxd.o 00:05:59.122 CXX test/cpp_headers/idxd_spec.o 00:05:59.122 CXX test/cpp_headers/init.o 00:05:59.380 CXX test/cpp_headers/ioat.o 00:05:59.380 CXX test/cpp_headers/ioat_spec.o 00:05:59.380 CXX test/cpp_headers/iscsi_spec.o 00:05:59.380 LINK abort 00:05:59.380 CXX test/cpp_headers/json.o 00:05:59.380 CXX test/cpp_headers/jsonrpc.o 00:05:59.637 CXX test/cpp_headers/keyring.o 00:05:59.637 LINK bdevperf 00:05:59.637 CXX test/cpp_headers/keyring_module.o 00:05:59.637 CXX test/cpp_headers/likely.o 00:05:59.637 CXX test/cpp_headers/log.o 00:05:59.637 CXX test/cpp_headers/lvol.o 00:05:59.637 CXX test/cpp_headers/memory.o 00:05:59.637 CXX test/cpp_headers/mmio.o 00:05:59.637 CXX test/cpp_headers/nbd.o 00:05:59.637 CXX test/cpp_headers/net.o 00:05:59.637 CXX test/cpp_headers/notify.o 00:05:59.637 CXX test/cpp_headers/nvme.o 00:05:59.894 CXX test/cpp_headers/nvme_intel.o 00:05:59.894 CXX test/cpp_headers/nvme_ocssd.o 00:05:59.894 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:59.894 CXX test/cpp_headers/nvme_spec.o 00:05:59.894 CXX test/cpp_headers/nvme_zns.o 00:05:59.894 CXX test/cpp_headers/nvmf_cmd.o 00:05:59.894 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:59.894 CXX test/cpp_headers/nvmf.o 00:06:00.151 CXX test/cpp_headers/nvmf_spec.o 00:06:00.151 CXX test/cpp_headers/nvmf_transport.o 00:06:00.151 CC examples/nvmf/nvmf/nvmf.o 00:06:00.151 CXX test/cpp_headers/opal.o 00:06:00.151 CXX test/cpp_headers/opal_spec.o 00:06:00.151 CXX test/cpp_headers/pci_ids.o 00:06:00.151 CXX test/cpp_headers/pipe.o 00:06:00.409 CXX test/cpp_headers/queue.o 00:06:00.409 CXX test/cpp_headers/reduce.o 00:06:00.409 CXX test/cpp_headers/rpc.o 00:06:00.409 CXX test/cpp_headers/scheduler.o 00:06:00.409 CXX test/cpp_headers/scsi.o 00:06:00.409 CXX test/cpp_headers/scsi_spec.o 00:06:00.409 CXX test/cpp_headers/sock.o 00:06:00.409 CXX test/cpp_headers/stdinc.o 00:06:00.409 CXX test/cpp_headers/string.o 00:06:00.409 LINK nvmf 00:06:00.667 CXX test/cpp_headers/thread.o 00:06:00.667 CXX test/cpp_headers/trace.o 00:06:00.667 CXX test/cpp_headers/trace_parser.o 00:06:00.667 CXX test/cpp_headers/tree.o 00:06:00.667 CXX test/cpp_headers/ublk.o 00:06:00.667 CXX test/cpp_headers/util.o 00:06:00.667 CXX test/cpp_headers/uuid.o 00:06:00.667 CXX test/cpp_headers/version.o 00:06:00.667 CXX test/cpp_headers/vfio_user_pci.o 00:06:00.667 CXX test/cpp_headers/vfio_user_spec.o 00:06:00.667 CXX test/cpp_headers/vhost.o 00:06:00.667 CXX test/cpp_headers/vmd.o 00:06:00.667 CXX test/cpp_headers/xor.o 00:06:00.925 CXX test/cpp_headers/zipf.o 00:06:01.492 LINK esnap 00:06:02.060 00:06:02.060 real 1m29.758s 00:06:02.060 user 10m3.616s 00:06:02.060 sys 2m1.110s 00:06:02.060 05:12:06 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:02.060 05:12:06 make -- common/autotest_common.sh@10 -- $ set +x 00:06:02.060 ************************************ 00:06:02.060 END TEST make 00:06:02.060 ************************************ 00:06:02.060 05:12:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:02.060 05:12:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:02.060 05:12:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:02.060 05:12:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.060 05:12:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:02.060 05:12:06 -- pm/common@44 -- $ pid=5182 00:06:02.060 05:12:06 -- pm/common@50 -- $ kill -TERM 5182 00:06:02.060 05:12:06 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.060 05:12:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:02.060 05:12:06 -- pm/common@44 -- $ pid=5183 00:06:02.060 05:12:06 -- pm/common@50 -- $ kill -TERM 5183 00:06:02.319 05:12:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.319 05:12:06 -- nvmf/common.sh@7 -- # uname -s 00:06:02.319 05:12:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.319 05:12:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.319 05:12:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.319 05:12:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.319 05:12:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.319 05:12:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.319 05:12:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.319 05:12:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.319 05:12:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.319 05:12:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.319 05:12:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:06:02.319 05:12:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:06:02.319 05:12:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.319 05:12:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.319 05:12:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:02.319 05:12:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.319 05:12:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.319 05:12:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.319 05:12:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.319 05:12:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.319 05:12:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.319 05:12:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.319 05:12:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.319 05:12:06 -- paths/export.sh@5 -- # export PATH 00:06:02.319 05:12:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.319 05:12:06 -- nvmf/common.sh@47 -- # : 0 00:06:02.319 05:12:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.319 
05:12:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.319 05:12:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.319 05:12:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.319 05:12:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.319 05:12:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.319 05:12:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.319 05:12:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.319 05:12:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:02.319 05:12:06 -- spdk/autotest.sh@32 -- # uname -s 00:06:02.319 05:12:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:02.319 05:12:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:02.319 05:12:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:02.319 05:12:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:02.319 05:12:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:02.319 05:12:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:02.319 05:12:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:02.319 05:12:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:02.319 05:12:06 -- spdk/autotest.sh@48 -- # udevadm_pid=54793 00:06:02.319 05:12:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:02.319 05:12:06 -- pm/common@17 -- # local monitor 00:06:02.319 05:12:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.319 05:12:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.319 05:12:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:02.319 05:12:06 -- pm/common@25 -- # sleep 1 00:06:02.319 05:12:06 -- pm/common@21 -- # date +%s 00:06:02.319 05:12:06 -- pm/common@21 -- # date +%s 00:06:02.319 05:12:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721884326 00:06:02.319 05:12:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721884326 00:06:02.319 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721884326_collect-vmstat.pm.log 00:06:02.319 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721884326_collect-cpu-load.pm.log 00:06:03.254 05:12:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:03.254 05:12:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:03.254 05:12:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.254 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:06:03.254 05:12:07 -- spdk/autotest.sh@59 -- # create_test_list 00:06:03.254 05:12:07 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:03.254 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:06:03.254 05:12:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:03.254 05:12:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:03.254 05:12:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:03.254 05:12:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:03.254 05:12:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
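The setup trace above saves the kernel's current core_pattern, points it at SPDK's core-collector.sh pipe handler, and starts the pm resource monitors with epoch-stamped log names. A minimal sketch of that save/swap/restore core_pattern idiom, assuming a stand-in collector path (%P, %s, and %t are the kernel's PID, signal, and epoch specifiers; /usr/local/bin/core-collector.sh and /tmp/coredumps are hypothetical, not the autotest paths):

    #!/usr/bin/env bash
    # Sketch: pipe kernel coredumps to a collector script, restoring the
    # previous pattern when the script exits. Requires root.
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT
    mkdir -p /tmp/coredumps   # where the collector would drop cores
    echo '|/usr/local/bin/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
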
00:06:03.254 05:12:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:03.254 05:12:07 -- common/autotest_common.sh@1455 -- # uname 00:06:03.254 05:12:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:03.254 05:12:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:03.254 05:12:07 -- common/autotest_common.sh@1475 -- # uname 00:06:03.254 05:12:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:03.254 05:12:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:03.254 05:12:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:03.254 05:12:07 -- spdk/autotest.sh@72 -- # hash lcov 00:06:03.254 05:12:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:03.254 05:12:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:03.254 --rc lcov_branch_coverage=1 00:06:03.254 --rc lcov_function_coverage=1 00:06:03.254 --rc genhtml_branch_coverage=1 00:06:03.254 --rc genhtml_function_coverage=1 00:06:03.254 --rc genhtml_legend=1 00:06:03.254 --rc geninfo_all_blocks=1 00:06:03.254 ' 00:06:03.254 05:12:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:03.254 --rc lcov_branch_coverage=1 00:06:03.254 --rc lcov_function_coverage=1 00:06:03.254 --rc genhtml_branch_coverage=1 00:06:03.254 --rc genhtml_function_coverage=1 00:06:03.254 --rc genhtml_legend=1 00:06:03.254 --rc geninfo_all_blocks=1 00:06:03.255 ' 00:06:03.255 05:12:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:03.255 --rc lcov_branch_coverage=1 00:06:03.255 --rc lcov_function_coverage=1 00:06:03.255 --rc genhtml_branch_coverage=1 00:06:03.255 --rc genhtml_function_coverage=1 00:06:03.255 --rc genhtml_legend=1 00:06:03.255 --rc geninfo_all_blocks=1 00:06:03.255 --no-external' 00:06:03.255 05:12:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:03.255 --rc lcov_branch_coverage=1 00:06:03.255 --rc lcov_function_coverage=1 00:06:03.255 --rc genhtml_branch_coverage=1 00:06:03.255 --rc genhtml_function_coverage=1 00:06:03.255 --rc genhtml_legend=1 00:06:03.255 --rc geninfo_all_blocks=1 00:06:03.255 --no-external' 00:06:03.255 05:12:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:03.513 lcov: LCOV version 1.14 00:06:03.513 05:12:07 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:21.590 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:21.590 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:36.462 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:36.462 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:36.462 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:36.462 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:36.462 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:36.462 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:36.462 [geninfo went on to emit the identical "no functions found" / "GCOV did not produce any data" warning pair for each remaining header object, test/cpp_headers/barrier.gcno through test/cpp_headers/vfio_user_spec.gcno; the duplicated warnings are condensed here] 00:06:36.463 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:36.463 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:36.463 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:36.463 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:36.463 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:36.463 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:36.463 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:36.463 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:38.997 05:12:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:38.997 05:12:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.997 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:06:38.997 05:12:42 -- spdk/autotest.sh@91 -- # rm -f 00:06:38.997 05:12:42 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:39.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.564 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:39.564 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:39.564 05:12:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:39.564 05:12:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:39.564 05:12:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:39.564 05:12:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:39.564 05:12:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.564 05:12:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:39.564 05:12:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:39.564 05:12:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:39.564 05:12:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.564 05:12:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.564 05:12:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:39.564 05:12:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:39.564 05:12:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:39.564 05:12:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.564 05:12:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.564 05:12:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:06:39.564 05:12:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:39.564 05:12:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:39.564 05:12:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.564 05:12:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.564 05:12:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:39.564 05:12:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:39.564 05:12:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:39.564 05:12:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.564 05:12:43 -- spdk/autotest.sh@98 -- # 
(( 0 > 0 )) 00:06:39.564 05:12:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.564 05:12:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:39.564 05:12:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:39.564 05:12:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:39.564 05:12:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:39.564 No valid GPT data, bailing 00:06:39.564 05:12:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:39.564 05:12:43 -- scripts/common.sh@391 -- # pt= 00:06:39.564 05:12:43 -- scripts/common.sh@392 -- # return 1 00:06:39.564 05:12:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:39.564 1+0 records in 00:06:39.564 1+0 records out 00:06:39.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054041 s, 194 MB/s 00:06:39.564 05:12:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.564 05:12:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:39.564 05:12:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:06:39.564 05:12:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:06:39.564 05:12:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:39.877 No valid GPT data, bailing 00:06:39.877 05:12:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:39.877 05:12:43 -- scripts/common.sh@391 -- # pt= 00:06:39.877 05:12:43 -- scripts/common.sh@392 -- # return 1 00:06:39.877 05:12:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:39.877 1+0 records in 00:06:39.877 1+0 records out 00:06:39.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437561 s, 240 MB/s 00:06:39.877 05:12:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.877 05:12:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:39.877 05:12:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:06:39.877 05:12:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:06:39.877 05:12:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:39.877 No valid GPT data, bailing 00:06:39.877 05:12:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:39.877 05:12:43 -- scripts/common.sh@391 -- # pt= 00:06:39.877 05:12:43 -- scripts/common.sh@392 -- # return 1 00:06:39.877 05:12:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:39.877 1+0 records in 00:06:39.877 1+0 records out 00:06:39.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00356878 s, 294 MB/s 00:06:39.877 05:12:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.877 05:12:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:39.877 05:12:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:06:39.877 05:12:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:06:39.877 05:12:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:39.877 No valid GPT data, bailing 00:06:39.877 05:12:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:39.877 05:12:43 -- scripts/common.sh@391 -- # pt= 00:06:39.877 05:12:43 -- scripts/common.sh@392 -- # return 1 00:06:39.877 05:12:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:39.877 1+0 records in 00:06:39.877 1+0 records out 00:06:39.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.00460928 s, 227 MB/s 00:06:39.877 05:12:43 -- spdk/autotest.sh@118 -- # sync 00:06:39.877 05:12:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:39.877 05:12:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:39.877 05:12:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:41.778 05:12:45 -- spdk/autotest.sh@124 -- # uname -s 00:06:41.778 05:12:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:41.778 05:12:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:41.778 05:12:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.778 05:12:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.778 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:06:41.778 ************************************ 00:06:41.778 START TEST setup.sh 00:06:41.778 ************************************ 00:06:41.778 05:12:45 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:41.778 * Looking for test storage... 00:06:41.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:41.778 05:12:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:41.778 05:12:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:41.778 05:12:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:41.778 05:12:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.778 05:12:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.778 05:12:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:41.778 ************************************ 00:06:41.778 START TEST acl 00:06:41.778 ************************************ 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:41.778 * Looking for test storage... 
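The pre-clean loop above ran each /dev/nvme*n!(*p*) namespace through spdk-gpt.py and blkid, found no valid GPT on any of them, and zeroed the first MiB of each with dd. A hedged sketch of that probe-then-wipe pattern, with blkid standing in for the spdk-gpt.py check (blkid exits non-zero when no partition table is found):

    # Sketch: zero the first MiB of each whole NVMe namespace unless a
    # partition table is still present on it.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
            echo "skipping $dev: $pt partition table present"
            continue
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1
    done
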
00:06:41.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:41.778 05:12:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:41.778 05:12:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:41.778 05:12:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:41.778 05:12:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:41.778 05:12:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:41.778 05:12:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:41.778 05:12:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:41.778 05:12:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:41.778 05:12:45 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:42.714 05:12:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:42.714 05:12:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:42.714 05:12:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:42.714 05:12:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:42.714 05:12:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:42.714 05:12:46 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:43.281 05:12:47 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:43.281 Hugepages 00:06:43.281 node hugesize free / total 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:43.281 00:06:43.281 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:06:43.281 05:12:47 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:43.281 05:12:47 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.281 05:12:47 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.281 05:12:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:43.281 ************************************ 00:06:43.281 START TEST denied 00:06:43.281 ************************************ 00:06:43.282 05:12:47 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:06:43.282 05:12:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:06:43.282 05:12:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:43.282 05:12:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:43.282 05:12:47 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:43.282 05:12:47 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:06:44.258 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:44.258 05:12:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:44.259 05:12:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:44.826 00:06:44.826 real 0m1.406s 00:06:44.826 user 0m0.549s 00:06:44.826 sys 0m0.789s 00:06:44.826 05:12:48 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.826 05:12:48 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 ************************************ 00:06:44.826 END TEST denied 00:06:44.826 ************************************ 00:06:44.826 05:12:48 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:44.826 05:12:48 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.826 05:12:48 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.826 05:12:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 ************************************ 00:06:44.826 START TEST allowed 00:06:44.826 ************************************ 00:06:44.826 05:12:48 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:06:44.826 05:12:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:06:44.826 05:12:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:44.826 05:12:48 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:06:44.826 05:12:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:44.826 05:12:48 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:45.761 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:45.761 05:12:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:46.351 00:06:46.351 real 0m1.455s 00:06:46.351 user 0m0.642s 00:06:46.351 sys 0m0.806s 00:06:46.351 05:12:50 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.351 ************************************ 00:06:46.351 END TEST 
allowed 00:06:46.351 ************************************ 00:06:46.351 05:12:50 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:46.351 00:06:46.351 real 0m4.554s 00:06:46.351 user 0m2.018s 00:06:46.352 sys 0m2.483s 00:06:46.352 05:12:50 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.352 05:12:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:46.352 ************************************ 00:06:46.352 END TEST acl 00:06:46.352 ************************************ 00:06:46.352 05:12:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:46.352 05:12:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.352 05:12:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.352 05:12:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:46.352 ************************************ 00:06:46.352 START TEST hugepages 00:06:46.352 ************************************ 00:06:46.352 05:12:50 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:46.352 * Looking for test storage... 00:06:46.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5889740 kB' 'MemAvailable: 7402652 kB' 'Buffers: 2436 kB' 'Cached: 1724372 kB' 'SwapCached: 0 kB' 'Active: 477344 kB' 'Inactive: 1354136 kB' 'Active(anon): 115160 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106568 kB' 'Mapped: 48696 kB' 'Shmem: 10488 kB' 'KReclaimable: 67060 kB' 'Slab: 140580 kB' 'SReclaimable: 67060 kB' 'SUnreclaim: 73520 kB' 'KernelStack: 6316 kB' 'PageTables: 4304 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 327128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.352 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.352 05:12:50 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.352 [the xtrace loop then repeats the same "[[ <field> == Hugepagesize ]]" / "continue" pair for every remaining /proc/meminfo field, Inactive(anon) through HugePages_Total; the duplicated iterations are condensed here]
05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:46.353 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:46.354 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:46.612 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
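The loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo one "key: value" pair at a time until the requested field matches; every non-matching key shows up as one compare plus one "continue". A minimal sketch of the same pattern, reconstructed from the xtrace rather than copied from SPDK's source (the function name is illustrative):

    # get_meminfo_value KEY -- print the value column for KEY from /proc/meminfo.
    get_meminfo_value() {
        local want=$1 var val _
        # IFS=': ' splits "Hugepagesize:       2048 kB" into var=Hugepagesize, val=2048.
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # each miss logs one compare + continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value Hugepagesize   # prints 2048 on this runner (2 MiB pages)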
00:06:46.612 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:06:46.612 05:12:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:06:46.612 05:12:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:46.612 05:12:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:46.612 05:12:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:46.612 ************************************
00:06:46.612 START TEST default_setup
00:06:46.612 ************************************
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:06:46.612 05:12:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:47.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:47.178 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:47.178 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
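get_test_nr_hugepages received 2097152 (kB) for node 0, and with the 2048 kB page size probed earlier that is where the nr_hugepages=1024 above comes from before scripts/setup.sh applies it. The same arithmetic by hand, using the standard kernel knob rather than SPDK's wrapper (run as root; the real setup.sh may write per-node sysfs paths instead, depending on HUGENODE):

    size_kb=2097152                      # 2 GiB of hugepage memory requested
    page_kb=2048                         # Hugepagesize reported by /proc/meminfo
    echo $(( size_kb / page_kb )) > /proc/sys/vm/nr_hugepages   # 2097152 / 2048 = 1024
    grep -E '^(HugePages_Total|Hugepagesize)' /proc/meminfo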
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.442 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:47.443 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7987132 kB' 'MemAvailable: 9499892 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494284 kB' 'Inactive: 1354144 kB' 'Active(anon): 132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123276 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140184 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73444 kB' 'KernelStack: 6320 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:47.443 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:47.443 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:47.443 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same setup/common.sh@31-@32 compare/continue xtrace repeats for every /proc/meminfo key until AnonHugePages matches ...]
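verify_nr_hugepages starts by checking the transparent-hugepage mode: the bracket test above compares the runner's THP setting, "always [madvise] never", against *\[never\]*, and only skips the AnonHugePages lookup when THP is fully off. Roughly, assuming the standard sysfs location and reusing the helper sketched earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages on its own; sample them so they
        # can be kept separate from the hugetlb pool being verified.
        anon=$(get_meminfo_value AnonHugePages)
    else
        anon=0
    fi

In this run AnonHugePages is 0 kB, so anon ends up 0 either way, as the xtrace below confirms.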
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7986632 kB' 'MemAvailable: 9499392 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494192 kB' 'Inactive: 1354144 kB' 'Active(anon): 132008 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123136 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140172 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73432 kB' 'KernelStack: 6272 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:47.444 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same setup/common.sh@31-@32 compare/continue xtrace repeats for every /proc/meminfo key until HugePages_Surp matches ...]
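With anon settled, the verifier reads HugePages_Surp and, right after it, HugePages_Rsvd from a fresh snapshot. In /proc/meminfo terms, HugePages_Rsvd counts pages already promised to a mapping but not yet faulted in, and HugePages_Surp counts overcommit pages sitting above nr_hugepages; both are 0 here, so the pool is exactly the 1024 configured pages, all free. The counters the xtrace walks one key at a time can be eyeballed in one shot:

    # One-shot view of the hugetlb counters the verifier extracts individually.
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo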
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7988336 kB' 'MemAvailable: 9501096 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 493992 kB' 'Inactive: 1354144 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122896 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140176 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73436 kB' 'KernelStack: 6240 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 
05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.446 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 
05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.447 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.448 
05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:47.448 nr_hugepages=1024 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:47.448 resv_hugepages=0 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:47.448 surplus_hugepages=0 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:47.448 anon_hugepages=0 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7988192 kB' 'MemAvailable: 9500952 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494184 kB' 'Inactive: 1354144 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122868 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140176 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73436 kB' 'KernelStack: 6256 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 
05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.448 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.449 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7988192 kB' 'MemUsed: 4253784 kB' 'SwapCached: 0 kB' 'Active: 493944 kB' 'Inactive: 1354144 kB' 'Active(anon): 131760 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1726800 kB' 'Mapped: 48700 kB' 'AnonPages: 122884 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 140176 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 
05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:47.450 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.451 05:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # the field scan continues: NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free all fail the [[ $var == HugePages_Surp ]] test and are skipped with continue
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
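The setup/common.sh trace above is a plain field-by-field scan of /proc/meminfo. A minimal standalone sketch of the same pattern, assuming only stock bash (the helper name get_meminfo_value is ours, not SPDK's):

get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # every non-matching field falls through to the next line,
        # which is exactly the long run of "continue" steps in the trace
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo_value HugePages_Surp    # prints 0 on the machine traced above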
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:47.451 node0=1024 expecting 1024
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:47.451
00:06:47.451 real    0m0.959s
00:06:47.451 user    0m0.457s
00:06:47.451 sys     0m0.452s
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:47.451 05:12:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:06:47.451 ************************************
00:06:47.451 END TEST default_setup
00:06:47.451 ************************************
00:06:47.451 05:12:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:06:47.451 05:12:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:47.451 05:12:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:47.451 05:12:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:47.451 ************************************
00:06:47.451 START TEST per_node_1G_alloc
00:06:47.451 ************************************
00:06:47.451 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143-145 -- # get_test_nr_hugepages 1048576 0: 1048576 kB (1 GiB) of hugepages requested on node 0; size >= default_hugepages, so nr_hugepages=512 and get_test_nr_hugepages_per_node 0 assigns all 512 pages to the single user node (nodes_test[0]=512)
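The nr_hugepages=512 figure derived in the per_node_1G_alloc prologue is straightforward unit arithmetic; a sketch, assuming the 2048 kB Hugepagesize reported in the meminfo snapshots below:

size_kb=1048576                       # 1 GiB requested, expressed in kB
hugepage_kb=2048                      # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"     # -> nr_hugepages=512, all placed on node 0 here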
00:06:47.451 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 HUGENODE=0 setup output
00:06:47.451 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:47.451 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:47.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:47.709 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:47.709 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:47.972 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:06:47.972 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:06:47.972 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:06:47.972 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:47.972 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages: node= is empty, /sys/devices/system/node/node/meminfo does not exist, so mem_f=/proc/meminfo and mapfile -t mem captures the snapshot below
00:06:47.972 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030896 kB' 'MemAvailable: 10543664 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494556 kB' 'Inactive: 1354152 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 48900 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140172 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73432 kB' 'KernelStack: 6276 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:47.974 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # the scan skips every field from MemTotal through HardwareCorrupted until AnonHugePages matches
00:06:47.974 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:47.974 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:47.974 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
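The NRHUGE=512 HUGENODE=0 run of scripts/setup.sh above asks for the pages to be reserved on a single NUMA node. A hedged sketch of the same reservation done directly through the kernel's standard per-node hugetlb sysfs interface (not necessarily the exact mechanism setup.sh uses; requires root, and the kernel may grant fewer pages than requested):

node=0
pages=512
sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
echo "$pages" | sudo tee "$sysfs" > /dev/null
cat "$sysfs"    # prints the count actually reserved on that node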
00:06:47.974 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:47.974 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # (second /proc/meminfo snapshot; identical to the first except Active: 494008 kB, Active(anon): 131824 kB, AnonPages: 123012 kB, Mapped: 48704 kB, Slab: 140164 kB, SUnreclaim: 73424 kB, KernelStack: 6272 kB, PageTables: 4288 kB, VmallocUsed: 54628 kB; the hugepage counters are unchanged: HugePages_Total: 512, HugePages_Free: 512, HugePages_Rsvd: 0, HugePages_Surp: 0)
00:06:47.976 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # the scan skips every field until HugePages_Surp matches
00:06:47.976 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:47.976 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:47.976 05:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:47.976 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:47.976 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # (third snapshot; differs from the second only in Active: 494056 kB, Active(anon): 131872 kB, AnonPages: 123008 kB and VmallocUsed: 54644 kB; hugepage counters still 512 / 512 / 0 / 0)
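Each get_meminfo call above runs with node= empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the global /proc/meminfo is read. When a node is given, the same scan works against the per-node file once the "Node <n> " line prefix is stripped, which is what the mem=("${mem[@]#Node +([0-9]) }") step in the trace does. A sketch of that node-aware variant (helper name ours, not SPDK's):

get_node_meminfo_value() {
    local get=$1 node=$2 var val _
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo
    # per-node meminfo lines carry a "Node <n> " prefix; strip it so the
    # same IFS=': ' field scan works for both files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed "s/^Node ${node} //" "$mem_f")
    return 1
}
get_node_meminfo_value HugePages_Total 0    # should print 512 after the allocation above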
00:06:47.976 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # scan of the third snapshot in progress: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable and Slab have so far failed the [[ $var == HugePages_Rsvd ]] test and been skipped
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.977 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 
05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:47.978 nr_hugepages=512 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:47.978 resv_hugepages=0 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:47.978 surplus_hugepages=0 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:47.978 anon_hugepages=0 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.978 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030896 kB' 'MemAvailable: 10543664 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494012 kB' 'Inactive: 1354152 kB' 'Active(anon): 131828 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140160 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73420 kB' 'KernelStack: 6272 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
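---- sketch (annotation, not log output) ----
Each get_meminfo call in this trace dumps the node's meminfo snapshot and then scans it entry by entry, which is what the condensed @31/@32 runs above and below show. The following is a minimal, hedged Bash sketch of that pattern; get_meminfo_sketch is a hypothetical reconstruction inferred from the trace, not the literal setup/common.sh body.

#!/usr/bin/env bash
# Hypothetical reconstruction of the get_meminfo pattern in the trace:
# pick /proc/meminfo or a per-node meminfo file, strip any "Node N "
# prefix, then scan "key: value" pairs until the requested key matches.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node query
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix "Node 0 "

    local IFS=': ' var val _ line
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"      # the @31 lines in the trace
        [[ $var == "$get" ]] || continue   # the @32 compare/continue lines
        echo "$val"                        # the @33 echo before return 0
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total      # -> 512 on the box traced here
get_meminfo_sketch HugePages_Surp 0     # -> 0, read from node0's meminfo
---- end sketch ----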
[trace condensed: setup/common.sh@31 "read -r var val _" and setup/common.sh@32 "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" repeat for every key in the snapshot above, MemTotal through Unaccepted, until the HugePages_Total entry matches]
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:47.980 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030896 kB' 'MemUsed: 3211080 kB' 'SwapCached: 0 kB' 'Active: 493704 kB' 'Inactive: 1354152 kB' 'Active(anon): 131520 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1726800 kB' 'Mapped: 48704 kB' 'AnonPages: 122908 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 140160 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the same @31/@32 scan walks the node0 snapshot key by key, MemTotal through HugePages_Free, until the HugePages_Surp entry matches]
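---- sketch (annotation, not log output) ----
Once the per-node surplus comes back, the bookkeeping that concludes the test below reduces to a small amount of arithmetic: the global pool must reconcile (512 == nr_hugepages + surp + resv), and each node's expectation grows by the reserved and per-node surplus pages before being compared. A hedged Bash sketch of that accounting follows; meminfo() here is a stand-in helper written for this sketch, not part of SPDK's setup/common.sh.

#!/usr/bin/env bash
# meminfo KEY [FILE]: stand-in helper that handles both /proc/meminfo
# ("Key: val") and per-node files ("Node 0 Key: val").
meminfo() {
    awk -v k="$1:" '$1 == k { print $2 } $3 == k { print $4 }' "${2:-/proc/meminfo}"
}

nr_hugepages=512                     # what the test configured
surp=$(meminfo HugePages_Surp)       # 0 in the trace above
resv=$(meminfo HugePages_Rsvd)       # 0 in the trace above
total=$(meminfo HugePages_Total)     # 512 in the trace above

# Global reconciliation, as in hugepages.sh@107/@110: 512 == 512 + 0 + 0.
(( total == nr_hugepages + surp + resv )) || echo "global pool mismatch" >&2

# Per-node expectation, as in hugepages.sh@115-@130: start from the pages
# placed on the node, then add reserved and per-node surplus pages.
node0=/sys/devices/system/node/node0/meminfo
expect=512
(( expect += resv ))
(( expect += $(meminfo HugePages_Surp "$node0") ))
echo "node0=$(meminfo HugePages_Total "$node0") expecting $expect"
# Prints "node0=512 expecting 512" on the box traced here.
---- end sketch ----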
]] 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:47.981 node0=512 expecting 512 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:47.981 00:06:47.981 real 0m0.540s 00:06:47.981 user 0m0.277s 00:06:47.981 sys 0m0.269s 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.981 05:12:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:47.981 ************************************ 00:06:47.981 END TEST per_node_1G_alloc 00:06:47.981 ************************************ 00:06:47.982 05:12:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:47.982 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.982 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.982 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:47.982 ************************************ 00:06:47.982 START TEST even_2G_alloc 00:06:47.982 ************************************ 00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:47.982 
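The result above confirms that per_node_1G_alloc left node 0 with 512 hugepages of the default 2048 kB size, i.e. 1 GiB on a single node. The per-node count is exposed directly in sysfs, so the check reduces to a read and a compare. A minimal sketch of that idea, assuming the standard Linux sysfs layout (the expected value and variable names are illustrative, not the test script's own code):

    # Read node 0's 2 MiB hugepage count and compare with the expected
    # total (512 pages * 2048 kB = 1 GiB).
    expected=512
    nr=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node0=$nr expecting $expected"
    [[ $nr == "$expected" ]] || echo "unexpected hugepage count on node0" >&2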
00:06:47.982 05:12:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:06:47.982 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:47.982 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:47.982 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:47.982 ************************************
00:06:47.982 START TEST even_2G_alloc
00:06:47.982 ************************************
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:47.982 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:48.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:48.563 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:48.563 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
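get_test_nr_hugepages in the trace above derives the hugepage count from the requested test size: 2097152 kB (2 GiB) divided by the default 2048 kB hugepage size gives nr_hugepages=1024, and with a single memory node the whole count lands on node 0 before being exported as NRHUGE=1024 with HUGE_EVEN_ALLOC=yes for scripts/setup.sh. A minimal sketch of that arithmetic (variable names are illustrative):

    # Convert a test size in kB into a hugepage count.
    size_kb=2097152        # 2 GiB, the "2G" in even_2G_alloc
    hugepage_kb=2048       # default hugepage size reported in /proc/meminfo
    nr_hugepages=$((size_kb / hugepage_kb))
    echo "NRHUGE=$nr_hugepages"   # 1024, matching the trace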
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980604 kB' 'MemAvailable: 9493372 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494152 kB' 'Inactive: 1354152 kB' 'Active(anon): 131968 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123376 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140100 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6296 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:48.563 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [the IFS=': ' read / continue trace repeats for every key, MemTotal through HardwareCorrupted, until the requested key matches]
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
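The get_meminfo trace above is a plain field scan: /proc/meminfo is read into an array, each "key: value" line is split with IFS=': ', and every non-matching key falls through a continue until the requested key is found, at which point its value is echoed (0 for AnonHugePages here). A standalone sketch of the same pattern, not the project's exact common.sh code:

    # Print the value of one /proc/meminfo field, e.g. get_meminfo AnonHugePages.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every key until the requested one matches.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

The script under test also supports per-node queries via /sys/devices/system/node/node<N>/meminfo, which appears to be why the trace checks for that file and strips "Node N" prefixes with mem=("${mem[@]#Node +([0-9]) }").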
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:48.564 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:48.565 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot, identical to the first except: Active: 494008 kB, Active(anon): 131824 kB, AnonPages: 122936 kB, KernelStack: 6264 kB, PageTables: 4084 kB, VmallocUsed: 54612 kB]
00:06:48.565 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:48.565 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [the IFS=': ' read / continue trace repeats for every key, MemTotal through HugePages_Rsvd, until HugePages_Surp matches]
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:48.566 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:48.567 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' [third /proc/meminfo snapshot, identical to the first except: Active: 493880 kB, Active(anon): 131696 kB, AnonPages: 122804 kB, KernelStack: 6248 kB, PageTables: 4036 kB, VmallocUsed: 54612 kB]
00:06:48.567 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:48.567 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [the IFS=': ' read / continue trace repeats while the loop scans for HugePages_Rsvd; the captured log breaks off mid-scan, after the Committed_AS key]
05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 
05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:48.568 nr_hugepages=1024 00:06:48.568 resv_hugepages=0 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:48.568 surplus_hugepages=0 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:48.568 anon_hugepages=0 00:06:48.568 05:12:52 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:48.568 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980952 kB' 'MemAvailable: 9493720 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 493840 kB' 'Inactive: 1354152 kB' 'Active(anon): 131656 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140100 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6232 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.569 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.570 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980952 kB' 'MemUsed: 4261024 kB' 'SwapCached: 0 kB' 'Active: 494036 kB' 'Inactive: 1354152 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1726800 kB' 'Mapped: 48704 kB' 'AnonPages: 122996 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 140104 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 
05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.571 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:48.572 node0=1024 expecting 1024 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:48.572 00:06:48.572 real 0m0.486s 00:06:48.572 user 0m0.243s 00:06:48.572 sys 0m0.274s 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.572 05:12:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:48.572 
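The long runs of "continue" above are the trace of the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or a per-node copy under /sys/devices/system/node when a node argument is given) into an array, strips the "Node <N> " prefix that per-node files carry, then walks the fields with IFS=': ' read -r var val _, skipping every field that is not the one requested and echoing the value when it matches. A minimal standalone sketch of that loop, reconstructed from the trace rather than copied from the SPDK source (the prefix-strip expansion is taken verbatim from the trace; everything else is a simplification):

shopt -s extglob   # needed for the +([0-9]) pattern in the prefix strip

get_meminfo() {
    local get=$1 node=$2 var val _ entry
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node queries (e.g. get_meminfo HugePages_Surp 0) read the node's
    # own meminfo file instead of the system-wide one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ..."; drop the prefix so
    # the field names line up with the plain /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")
    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$entry"
        [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
        echo "$val"
        return 0
    done
    return 1
}

Called as get_meminfo HugePages_Rsvd it prints the 0 seen above, and as get_meminfo HugePages_Total it prints 1024; those values feed the (( 1024 == nr_hugepages + surp + resv )) consistency check and, via get_meminfo HugePages_Surp 0 on node0's meminfo, the per-node "node0=1024 expecting 1024" verdict.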
************************************ 00:06:48.572 END TEST even_2G_alloc 00:06:48.572 ************************************ 00:06:48.572 05:12:52 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:48.572 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.572 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.572 05:12:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:48.572 ************************************ 00:06:48.572 START TEST odd_alloc 00:06:48.572 ************************************ 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:48.572 05:12:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:48.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:49.093 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:49.093 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:49.093 05:12:53 
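For the odd_alloc case the requested size of 2098176 kB follows from HUGEMEM=2049 (2049 MiB = 2049 * 1024 kB), and with the 2048 kB Hugepagesize in this run that is 1024.5 pages, which get_test_nr_hugepages settles at the deliberately odd count nr_hugepages=1025. The exact rounding rule is an assumption here; the trace only shows 2098176 kB mapping to 1025 pages. The arithmetic as a sketch:

hugepagesize_kb=2048                 # Hugepagesize reported in this log
hugemem_mb=2049                      # HUGEMEM exported by the odd_alloc test
size_kb=$(( hugemem_mb * 1024 ))     # 2098176, the value passed to get_test_nr_hugepages
# Round up to whole pages (assumed; matches the 1025 the trace settles on).
nr=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "$nr"                           # 1025
echo $(( nr * hugepagesize_kb ))     # 2099200 kB, the Hugetlb figure in the next meminfo dump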
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7978392 kB' 'MemAvailable: 9491160 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494296 kB' 'Inactive: 1354152 kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140120 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73380 kB' 'KernelStack: 6244 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:49.093 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... xtrace condensed: every key from MemTotal through HardwareCorrupted compared against AnonHugePages and skipped via continue ...]
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7978496 kB' 'MemAvailable: 9491264 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 493948 kB' 'Inactive: 1354152 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140116 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73376 kB' 'KernelStack: 6196 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:49.094 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace condensed: every non-matching key skipped via continue until HugePages_Surp ...]
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
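verify_nr_hugepages only counts anonymous huge pages because /sys/kernel/mm/transparent_hugepage/enabled reads 'always [madvise] never' rather than '[never]' (the hugepages.sh@96 test above); it then fetches the surplus and, in the trace that follows, the reserved counter through the same get_meminfo scan. A condensed sketch of that gathering step, reusing the hypothetical get_meminfo from the earlier sketch:

    # Sketch of how the three counters checked later are gathered.
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then        # THP not disabled outright
        anon=$(get_meminfo AnonHugePages)     # 0 on this VM
    fi
    surp=$(get_meminfo HugePages_Surp)        # 0
    resv=$(get_meminfo HugePages_Rsvd)        # 0, read next in the trace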
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7978496 kB' 'MemAvailable: 9491264 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 493948 kB' 'Inactive: 1354152 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123096 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140116 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73376 kB' 'KernelStack: 6196 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:49.096 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... xtrace condensed: every non-matching key skipped via continue until HugePages_Rsvd ...]
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:49.098 nr_hugepages=1025
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:06:49.098 resv_hugepages=0
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:49.098 surplus_hugepages=0
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:49.098 anon_hugepages=0
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7978244 kB' 'MemAvailable: 9491012 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 
493804 kB' 'Inactive: 1354152 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122988 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140128 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73388 kB' 'KernelStack: 6256 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.098 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 
05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
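The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' records above and below are xtrace output from the get_meminfo helper in setup/common.sh: it slurps the whole meminfo file into an array, strips any 'Node N ' prefix, then scans field by field, skipping every key until the requested one matches and its value can be echoed. A minimal sketch consistent with the trace (function, variable, and path names are read off the log; the actual SPDK helper may differ in detail):

    get_meminfo() {                     # e.g. get_meminfo HugePages_Total [node]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        shopt -s extglob                # needed for the +([0-9]) pattern below
        # Per-node counters live in sysfs; with no node argument the
        # node/meminfo path does not exist and /proc/meminfo is used instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # one skipped key per loop pass
            echo "$val"
            return 0
        done
    }

Each skipped meminfo field costs four trace records (IFS, read, the [[ ]] test, continue), which is why a single get_meminfo call dominates this part of the log.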
00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.099 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:49.100 
05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7978244 kB' 'MemUsed: 4263732 kB' 'SwapCached: 0 kB' 'Active: 494076 kB' 'Inactive: 1354152 kB' 'Active(anon): 131892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1726800 kB' 'Mapped: 48704 kB' 'AnonPages: 123004 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 140120 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.100 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
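Here the test has moved from the global counters to the per-node view: get_nodes enumerated /sys/devices/system/node/node+([0-9]) and found a single node holding all 1025 pages (no_nodes=1), and each node's expected count is then topped up with the reserved pages and with that node's HugePages_Surp read from its own sysfs meminfo, which is what the node=0 get_meminfo call above is doing. A sketch of that bookkeeping under this run's single-node layout, reusing the get_meminfo sketch above (the expected 1025 is hard-coded here for illustration; the script derives it from the requested allocation):

    shopt -s extglob
    declare -a nodes_sys nodes_test
    resv=0                                        # from the HugePages_Rsvd scan above
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                          # "0" on this VM
        nodes_sys[n]=$(get_meminfo HugePages_Total "$n")
        nodes_test[n]=1025                        # assumed expectation per node
    done
    no_nodes=${#nodes_sys[@]}                     # no_nodes=1 in this run
    (( no_nodes > 0 ))
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))            # resv=0 here
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surp=0
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

With resv=0 and surp=0 the expectation stays at 1025, matching the 'node0=1025 expecting 1025' line that follows.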
00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:49.101 node0=1025 expecting 1025 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:49.101 
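That closes the odd_alloc case. Its point is that an odd, non-power-of-two page count (1025) survives the round trip through nr_hugepages and shows up intact in both the global and per-node counters; the gate it just passed reduces to three comparisons whose values are all visible in the trace:

    (( 1025 == nr_hugepages + surp + resv ))      # hugepages.sh@107: 1025 == 1025+0+0
    (( 1025 == nr_hugepages ))                    # hugepages.sh@109/@110
    [[ ${nodes_sys[0]} == "${nodes_test[0]}" ]]   # hugepages.sh@130: 1025 == 1025

(array names as in the earlier sketch; the script compares the literal expanded values, which is why the log shows '[[ 1025 == \1\0\2\5 ]]').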
00:06:49.101 real 0m0.504s 00:06:49.101 user 0m0.259s 00:06:49.101 sys 0m0.278s 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.101 05:12:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:49.101 ************************************ 00:06:49.101 END TEST odd_alloc 00:06:49.101 ************************************ 00:06:49.101 05:12:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:49.101 05:12:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.101 05:12:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.101 05:12:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:49.101 ************************************ 00:06:49.101 START TEST custom_alloc 00:06:49.101 ************************************ 00:06:49.101 05:12:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:06:49.101 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:49.101 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:49.101 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:49.102 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:49.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:49.670 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:49.670 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:49.670 05:12:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.670 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030396 kB' 'MemAvailable: 10543164 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494072 kB' 'Inactive: 1354152 kB' 'Active(anon): 131888 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140108 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73368 kB' 'KernelStack: 6264 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
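The custom_alloc numbers are consistent end to end: get_test_nr_hugepages was asked for 1048576 kB (1 GiB), the default page size reported by meminfo is 2048 kB, so the test settles on 512 pages, pins them to node 0 via HUGENODE, and the meminfo dump above confirms the result (HugePages_Total: 512, Hugetlb: 1048576 kB). The arithmetic, with the values taken from this run:

    size_kb=1048576                                     # requested: 1 GiB
    default_hugepages_kb=2048                           # Hugepagesize in meminfo
    nr_hugepages=$(( size_kb / default_hugepages_kb ))  # 512 pages
    HUGENODE="nodes_hp[0]=$nr_hugepages"                # 'nodes_hp[0]=512', as in the trace
    echo $(( nr_hugepages * default_hugepages_kb ))     # 1048576 kB = Hugetlb above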
00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.671 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030396 kB' 'MemAvailable: 10543164 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494144 kB' 'Inactive: 1354152 kB' 'Active(anon): 131960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123032 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140116 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73376 kB' 'KernelStack: 6264 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
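For readability: the entries above are one complete pass of the get_meminfo helper whose xtrace lines appear at setup/common.sh@17-33. A minimal Bash reconstruction from the trace alone follows; it is a sketch, not the verbatim SPDK test/setup/common.sh, so line numbers and small details may differ.

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    get_meminfo() {    # usage: get_meminfo <Key> [numa-node]
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # read the per-NUMA-node view instead when one exists
        # (the trace shows node= empty, so /proc/meminfo is used)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes

        # scan each "Key: value [kB]" line; print the value of the first match
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the snapshot printed above, get_meminfo AnonHugePages yields 0 (hence anon=0), and get_meminfo HugePages_Total would yield 512; the snapshot is internally consistent, since 512 pages x 2048 kB Hugepagesize = 1048576 kB, exactly the Hugetlb figure shown.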
00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-31: same setup as the AnonHugePages lookup above, now with get=HugePages_Surp]
00:06:49.672 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030396 kB' 'MemAvailable: 10543164 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494144 kB' 'Inactive: 1354152 kB' 'Active(anon): 131960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123032 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140116 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73376 kB' 'KernelStack: 6264 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32: every key from MemTotal through HugePages_Rsvd failed the HugePages_Surp match and hit continue; repetitive per-key trace elided]
00:06:49.674 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:49.674 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:49.674 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:49.674 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:49.674 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31: same setup again, now with get=HugePages_Rsvd]
00:06:49.674 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030396 kB' 'MemAvailable: 10543164 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 494032 kB' 'Inactive: 1354152 kB' 'Active(anon): 131848 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122920 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140120 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73380 kB' 'KernelStack: 6224 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32: every key from MemTotal through HugePages_Free failed the HugePages_Rsvd match and hit continue; repetitive per-key trace elided]
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:49.676 nr_hugepages=512
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:49.676 resv_hugepages=0
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:49.676 surplus_hugepages=0
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:49.676 anon_hugepages=0
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
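The hugepages.sh@97-110 entries above are the accounting step of the custom_alloc test: it reads back what the kernel actually allocated and insists that the 512 requested pages are all present, with none surplus or reserved. Sketched below, assuming the get_meminfo helper shown earlier; the function name verify_custom_alloc and the hard-coded 512 default are illustrative only, since the real setup/hugepages.sh derives its target from the test's per-node request.

    # Sketch of the verification pass seen in the trace.
    verify_custom_alloc() {
        local want=${1:-512}   # pages the test asked for (512 in this run)
        local anon surp resv nr_hugepages

        anon=$(get_meminfo AnonHugePages)     # transparent hugepages -> 0 here
        surp=$(get_meminfo HugePages_Surp)    # surplus pages         -> 0 here
        resv=$(get_meminfo HugePages_Rsvd)    # reserved pages        -> 0 here
        nr_hugepages=$(get_meminfo HugePages_Total)

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # both checks from the trace: every page accounted for, none extra
        ((want == nr_hugepages + surp + resv)) && ((want == nr_hugepages))
    }

With the values in this run (512 total, 0 surplus, 0 reserved) both arithmetic tests hold; a nonzero return here would be treated as a failure by the test harness.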
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:49.676 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030396 kB' 'MemAvailable: 10543164 kB' 'Buffers: 2436 kB' 'Cached: 1724364 kB' 'SwapCached: 0 kB' 'Active: 493988 kB' 'Inactive: 1354152 kB' 'Active(anon): 131804 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122880 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140120 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73380 kB' 'KernelStack: 6208 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace: per-key @32-test/@32-continue scan of the fields above, MemTotal through Unaccepted; none matches HugePages_Total]
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
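hugepages.sh@107-@116 above assert that the kernel's HugePages_Total equals the requested count plus surplus plus reserved pages, then prime the per-node expectations. A self-contained sketch of that bookkeeping, with awk standing in for get_meminfo (variable names follow the trace; the per-node loop assumes a layout like this single-node VM):

#!/usr/bin/env bash
# Sketch of the accounting traced above: global HugePages_Total should
# equal requested pages plus surplus plus reserved, and each NUMA node
# should report its expected share. nr_hugepages mirrors the trace.
nr_hugepages=512
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

(( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"

# Per-node pass, as in get_nodes/@115 (one node on the VM in this log).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue
    node=${node_dir##*node}
    # Per-node lines read "Node N HugePages_Free: X"; take the last field.
    free=$(awk '/HugePages_Free:/ {print $NF}' "$node_dir/meminfo")
    echo "node$node HugePages_Free=$free expecting $nr_hugepages"
done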
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:49.678 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030396 kB' 'MemUsed: 3211580 kB' 'SwapCached: 0 kB' 'Active: 494080 kB' 'Inactive: 1354152 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1726800 kB' 'Mapped: 48704 kB' 'AnonPages: 123044 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 140116 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace: per-key @32-test/@32-continue scan of the node0 fields above; none before HugePages_Surp matches]
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:49.679 node0=512 expecting 512
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:49.679 real 0m0.513s
00:06:49.679 user 0m0.282s
00:06:49.679 sys 0m0.260s
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:49.679 05:12:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:49.679 ************************************
00:06:49.679 END TEST custom_alloc
00:06:49.679 ************************************
00:06:49.679 05:12:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:06:49.679 05:12:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:49.679 05:12:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:49.679 05:12:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:49.679 ************************************
00:06:49.679 START TEST no_shrink_alloc
00:06:49.679 ************************************
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
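The banner blocks and the real/user/sys lines come from the run_test wrapper in autotest_common.sh, which times a named test function between START/END markers. A behavior-only sketch consistent with what the log shows (the real helper also toggles xtrace and validates its arguments, per the @1101/@1107 entries above); demo_alloc is a hypothetical stand-in:

#!/usr/bin/env bash
# Hypothetical run_test-style wrapper: banner, timed call, banner.
# Not the autotest_common.sh implementation, only its visible behavior.
run_test() {
    local name=$1; shift
    (( $# >= 1 )) || return 1       # cf. the '[' 2 -le 1 ']' guard in the log
    printf '%s\n' '************************************' \
                  "START TEST $name" \
                  '************************************'
    time "$@"                       # produces the real/user/sys lines
    local rc=$?
    printf '%s\n' '************************************' \
                  "END TEST $name" \
                  '************************************'
    return "$rc"
}

demo_alloc() { sleep 0.1; }         # stand-in for a test function
run_test demo_alloc demo_alloc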
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:49.679 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:06:49.680 05:12:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
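hugepages.sh@49-@73 above turn the request "get_test_nr_hugepages 2097152 0" into nr_hugepages=1024 pinned to node 0. A sketch of the arithmetic that fits those numbers, assuming the page count is the requested kB divided by the kernel's Hugepagesize (the trace shows only the input and the result):

#!/usr/bin/env bash
# Sizing sketch: 2097152 kB / 2048 kB hugepages = 1024 pages, assigned
# per requested NUMA node. The division step is an assumption; the trace
# exposes only size=2097152 going in and nr_hugepages=1024 coming out.
size=2097152        # requested total, in kB, as in the trace
node_ids=(0)

hugepagesize=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size / hugepagesize ))

declare -A nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
echo "nr_hugepages=$nr_hugepages on node(s) ${!nodes_test[*]}"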
get=AnonHugePages 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7981548 kB' 'MemAvailable: 9494320 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 494276 kB' 'Inactive: 1354156 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123168 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140084 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73344 kB' 'KernelStack: 6264 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.202 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
[xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (Unevictable through HardwareCorrupted) with IFS=': ' read -r var val _ and skips every name that does not match the literal AnonHugePages with continue]
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
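The block above is the get_meminfo helper from the setup/common.sh seen in the trace, running under set -x: it snapshots /proc/meminfo (or a per-node meminfo file) into an array with mapfile, strips any "Node <N> " prefix, then walks the fields with IFS=': ' read -r var val _ until the requested name matches, echoes the value, and returns. xtrace prints the quoted right-hand side of the [[ == ]] comparison with every character backslash-escaped, which is why the literal pattern shows up as \A\n\o\n\H\u\g\e\P\a\g\e\s. A minimal standalone sketch of the same pattern (hypothetical helper name meminfo_get; the real function differs in detail):

# Minimal sketch of the pattern traced above (hypothetical standalone
# helper, not SPDK's setup/common.sh itself).
meminfo_get() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Mirrors the [[ -e /sys/devices/system/node/node$node/meminfo ]]
    # and [[ -n $node ]] checks in the trace: with a node argument,
    # read that node's meminfo instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; the strip
    # below needs extglob for the +([0-9]) pattern.
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Quoted RHS forces a literal match; set -x renders it
        # character-escaped, e.g. \A\n\o\n\H\u\g\e\P\a\g\e\s.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

On the VM traced here, meminfo_get AnonHugePages would print 0, matching the value echoed at common.sh@33 above.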
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:50.203 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:50.204 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:50.204 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:50.204 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:50.204 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7981548 kB' 'MemAvailable: 9494320 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 494184 kB' 'Inactive: 1354156 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123112 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140100 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6272 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the same field-by-field scan runs over the snapshot above until HugePages_Surp matches]
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
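At this point the test has anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and fetches HugePages_Rsvd next. These are standard kernel hugetlb counters: HugePages_Total is the pool size, HugePages_Free the unallocated pages, HugePages_Rsvd pages reserved by mappings but not yet faulted in, HugePages_Surp surplus pages allocated above nr_hugepages, while AnonHugePages counts transparent-hugepage usage and is unrelated to the reserved pool. Outside the harness the same counters can be inspected with a plain /proc read (ordinary grep, not SPDK tooling):

grep -E 'HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages|Hugepagesize' /proc/meminfo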
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:50.205 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:50.206 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7981548 kB' 'MemAvailable: 9494320 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 494468 kB' 'Inactive: 1354156 kB' 'Active(anon): 132284 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123260 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140100 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6288 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the field-by-field scan repeats until HugePages_Rsvd matches]
00:06:50.207 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:50.207 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:50.207 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:50.207 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:50.208 nr_hugepages=1024
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
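The two arithmetic checks at hugepages.sh@107 and @109 appear to be the point of this no_shrink_alloc step: the 1024 pages the test expects must be fully explained by the requested pool plus surplus plus reserved pages, with nothing shrunk away. Restated standalone with the values read from this trace (variable names mirror hugepages.sh; a sketch, not the test itself):

# Values fetched via get_meminfo in the trace above.
nr_hugepages=1024  # requested pool size
surp=0             # HugePages_Surp
resv=0             # HugePages_Rsvd
# The pool must account for exactly the expected 1024 pages...
(( 1024 == nr_hugepages + surp + resv )) || exit 1
# ...and none of them may come from surplus or reservations.
(( 1024 == nr_hugepages )) || exit 1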
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7981548 kB' 'MemAvailable: 9494320 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 493804 kB' 'Inactive: 1354156 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122804 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140092 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73352 kB' 'KernelStack: 6272 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the field-by-field scan for HugePages_Total proceeds in the same way]
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.208 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- 
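The block above is bash xtrace of setup/common.sh's get_meminfo: dump the meminfo file once, then walk it key by key until the requested key matches and print its value. Stripped of trace noise, the pattern is roughly this (a minimal sketch; the function name, argument handling, and default file are illustrative, not copied from setup/common.sh):

  #!/usr/bin/env bash
  # Print the value of one key from a meminfo-style file,
  # e.g. get_meminfo_key HugePages_Total  ->  "1024".
  get_meminfo_key() {
      local get=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
          echo "$val"                        # the "echo 1024" step
          return 0
      done < "$file"
      return 1                               # key not present
  }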
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:50.209 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:50.210 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7981548 kB' 'MemUsed: 4260428 kB' 'SwapCached: 0 kB' 'Active: 493808 kB' 'Inactive: 1354156 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1726804 kB' 'Mapped: 48704 kB' 'AnonPages: 123072 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 140092 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
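Note the @23/@24/@29 steps: because get_meminfo was called with node=0, it switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix, and strips that prefix with an extglob substitution so the same key scan works for both files. The trick in isolation (assuming bash with extglob; nothing here is lifted verbatim from the script):

  shopt -s extglob                                  # required for the +([0-9]) pattern
  # Load node0's meminfo and drop the leading "Node 0 " from every line.
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  mem=("${mem[@]#Node +([0-9]) }")                  # "Node 0 MemTotal: ..." -> "MemTotal: ..."
  printf '%s\n' "${mem[@]:0:3}"                     # first three normalized lines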
[xtrace condensed: the node0 scan tests each key from MemTotal through HugePages_Free against HugePages_Surp and continues past every mismatch]
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:50.211 node0=1024 expecting 1024
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:50.211 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:50.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:50.738 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:50.738 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:50.738 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:06:50.738 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
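Taken together, lines @110-@130 and @204 onward implement the check that the echoed "node0=1024 expecting 1024" summarizes: per node, allocated hugepages must equal the requested count plus surplus plus reserved, and the @96 test additionally requires that transparent hugepages are not hard-disabled. A condensed sketch of the same idea (node and expected count hard-coded for illustration, reserved pages omitted; the real verify_nr_hugepages in setup/hugepages.sh tracks more state):

  # THP policy must not be "[never]" (cf. the @96 test above).
  [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]

  # Per-node accounting: allocated == expected + surplus.
  # Node meminfo lines look like "Node 0 HugePages_Total: 1024", hence $3/$4.
  node=0 expected=1024
  total=$(awk '$3 == "HugePages_Total:" {print $4}' "/sys/devices/system/node/node${node}/meminfo")
  surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "/sys/devices/system/node/node${node}/meminfo")
  (( total == expected + surp )) && echo "node${node}=${total} expecting ${expected}"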
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7979540 kB' 'MemAvailable: 9492312 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 494576 kB' 'Inactive: 1354156 kB' 'Active(anon): 132392 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123560 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140108 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73368 kB' 'KernelStack: 6260 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 
05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.739 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7979540 kB' 'MemAvailable: 9492312 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 494344 kB' 'Inactive: 1354156 kB' 'Active(anon): 132160 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7979540 kB' 'MemAvailable: 9492312 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 494344 kB' 'Inactive: 1354156 kB' 'Active(anon): 132160 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123328 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140088 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73348 kB' 'KernelStack: 6228 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:50.740 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (per-field scan: MemTotal through HugePages_Rsvd, none matches HugePages_Surp, continue)
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
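Each call header also runs mem=("${mem[@]#Node +([0-9]) }") before scanning. That extglob expansion strips the "Node N " column that per-node meminfo files under /sys/devices/system/node carry, so the same scan loop serves both the global and the per-node files; on the global file, as here, it is a no-op. A stand-alone demo with hypothetical sample lines:

    # Demo of the "Node N " prefix strip seen in each get_meminfo header.
    # The sample lines are hypothetical; the expansion is the one traced.
    shopt -s extglob
    mem=('Node 0 HugePages_Total: 1024' 'Node 0 HugePages_Free: 1024')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # -> HugePages_Total: 1024
    # -> HugePages_Free: 1024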
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7979808 kB' 'MemAvailable: 9492580 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 490088 kB' 'Inactive: 1354156 kB' 'Active(anon): 127904 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118808 kB' 'Mapped: 48184 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 140092 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 73352 kB' 'KernelStack: 6256 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 326644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:50.742 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (per-field scan: MemTotal through HugePages_Free, none matches HugePages_Rsvd, continue)
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:50.744 nr_hugepages=1024
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:50.744 resv_hugepages=0
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:50.744 surplus_hugepages=0
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:50.744 anon_hugepages=0
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
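With anon, surp, and resv all read back as 0, the two arithmetic checks above assert that the configured pool is intact: the 1024-page target equals nr_hugepages plus surplus and reserved pages, and equals nr_hugepages outright, i.e. no_shrink_alloc has not lost any pages. A condensed sketch of that accounting, reusing the hypothetical get_meminfo_sketch helper from earlier (the real setup/hugepages.sh logic may differ in detail):

    # Sketch of the accounting asserted above; names are illustrative.
    check_pool_intact() {
        local target=1024                            # requested pool size
        local nr_hugepages surp resv
        nr_hugepages=$(</proc/sys/vm/nr_hugepages)   # pages the kernel kept
        surp=$(get_meminfo_sketch HugePages_Surp)    # surplus pages, 0 above
        resv=$(get_meminfo_sketch HugePages_Rsvd)    # reserved pages, 0 above
        # Pool must be exactly the requested size, with no surplus or
        # reserved pages outstanding, mirroring hugepages.sh@107 and @109.
        (( target == nr_hugepages + surp + resv )) || return 1
        (( target == nr_hugepages ))
    }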
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:50.744 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:50.745 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980004 kB' 'MemAvailable: 9492772 kB' 'Buffers: 2436 kB' 'Cached: 1724368 kB' 'SwapCached: 0 kB' 'Active: 489596 kB' 'Inactive: 1354156 kB' 'Active(anon): 127412 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118284 kB' 'Mapped: 48064 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 139988 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 73252 kB' 'KernelStack: 6144 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 326644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:06:50.745 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (per-field scan: MemTotal through HardwareCorrupted, none matches HugePages_Total, continue)
00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:50.746 05:12:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980652 kB' 'MemUsed: 4261324 kB' 'SwapCached: 0 kB' 'Active: 489060 kB' 'Inactive: 1354156 kB' 'Active(anon): 126876 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1354156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1726804 kB' 'Mapped: 47964 kB' 'AnonPages: 118256 kB' 'Shmem: 10464 kB' 'KernelStack: 6160 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66736 kB' 'Slab: 139984 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 73248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:50.746 05:12:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.746 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the same key-by-key scan now runs over the node0 meminfo snapshot printed above, issuing one `continue` per field from MemFree through HugePages_Free until HugePages_Surp matches] 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:50.748 node0=1024 expecting 1024 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:50.748 00:06:50.748 real 0m1.046s 00:06:50.748 user 0m0.528s 00:06:50.748 sys 0m0.560s 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.748 05:12:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:50.748 ************************************ 00:06:50.748 END TEST no_shrink_alloc 00:06:50.748 ************************************ 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:50.748 05:12:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:50.748 00:06:50.748 real 0m4.454s 00:06:50.748 user 0m2.195s 00:06:50.748 sys 0m2.337s 00:06:50.748 05:12:54 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.748 ************************************ 00:06:50.748 END TEST hugepages 05:12:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:50.748 ************************************
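The long runs of `continue` condensed above all come from setup/common.sh's get_meminfo helper: it snapshots a meminfo file (either /proc/meminfo or a node's own copy) and scans it key by key until the requested field matches — first HugePages_Total globally, then HugePages_Surp against the node0 snapshot printed in the trace. A minimal bash sketch reconstructed from the traced line numbers; the loop plumbing is inferred, since only the individual commands (common.sh@17-@33) appear in the log:

shopt -s extglob                       # required by the "Node N " prefix strip below

get_meminfo() {                        # usage: get_meminfo <key> [node]
    local get=$1 node=${2-} var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # common.sh@23-@24
    fi
    mapfile -t mem < "$mem_f"                              # common.sh@28
    mem=("${mem[@]#Node +([0-9]) }")                       # common.sh@29
    while IFS=': ' read -r var val _; do                   # common.sh@31
        [[ $var == "$get" ]] || continue                   # the elided continue lines
        echo "$val"                                        # e.g. 1024, then 0
        return 0
    done < <(printf '%s\n' "${mem[@]}")                    # cf. common.sh@16
    return 1
}

The trace above is exactly `get_meminfo HugePages_Total` followed by `get_meminfo HugePages_Surp 0`.
00:06:50.748 05:12:54 setup.sh --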
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:50.748 05:12:54 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.748 05:12:54 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.748 05:12:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:51.032 ************************************ 00:06:51.032 START TEST driver 00:06:51.032 ************************************ 00:06:51.032 05:12:54 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:51.032 * Looking for test storage... 00:06:51.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:51.032 05:12:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:51.032 05:12:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:51.032 05:12:54 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:51.603 05:12:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:51.603 05:12:55 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.603 05:12:55 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.603 05:12:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 ************************************ 00:06:51.603 START TEST guess_driver 00:06:51.603 ************************************ 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:06:51.603 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:51.603 Looking for driver=uio_pci_generic 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:51.603 05:12:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config [xtrace condensed: driver.sh@57-@61 then reads the config output marker lines, skipping the `devices:` header and confirming setup_driver == uio_pci_generic for each listed device] 00:06:52.426 05:12:56 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:52.426 05:12:56 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:52.426 05:12:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:52.426 05:12:56 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:52.993 00:06:52.993 real 0m1.379s 00:06:52.993 user 0m0.521s 00:06:52.993 sys 0m0.853s 00:06:52.993 05:12:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.993 05:12:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:52.993 ************************************ 00:06:52.993 END TEST guess_driver 00:06:52.993 ************************************ 00:06:52.993 00:06:52.993 real 0m2.036s 00:06:52.993 user 0m0.750s 00:06:52.993 sys 0m1.336s 00:06:52.993 05:12:56 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.993 05:12:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:52.993 ************************************ 00:06:52.993 END TEST driver 00:06:52.993 ************************************
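The guess_driver run just traced reduces to a small decision: prefer vfio-pci when an IOMMU is actually usable, otherwise accept uio_pci_generic only if modprobe can resolve it to real kernel objects. A sketch of that pick, reconstructed from driver.sh@21-@39 above (nullglob is assumed for the empty-glob count, and note the script's own `local iommu_grups` in the trace is a typo for iommu_groups):

pick_driver() {
    local iommu_groups unsafe_vfio
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    iommu_groups=(/sys/kernel/iommu_groups/*)      # empty on this VM: "(( 0 > 0 ))"
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # driver.sh@11-@14: --show-depends prints the would-be insmod chain without
    # loading anything, so matching *.ko proves the module exists for this kernel
    if modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

With no IOMMU groups present, the test settles on uio_pci_generic, as the `Looking for driver=` marker above confirms.
00:06:52.993 05:12:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:52.993 05:12:56 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.993 05:12:56 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.993 05:12:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:52.993 ************************************ 00:06:52.993 START TEST devices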
************************************ 00:06:52.993 05:12:56 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:52.993 * Looking for test storage... 00:06:52.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:52.993 05:12:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:52.993 05:12:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:52.993 05:12:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:52.993 05:12:57 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:53.930 05:12:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
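The zoned-namespace filter just traced (autotest_common.sh@1662-@1673) and the block scan in the trace that follows implement one gate: only non-zoned, unpartitioned NVMe disks of at least min_disk_size become test candidates. A rough sketch of both steps; the sector-to-byte arithmetic is an assumption, since the trace only shows the resulting byte counts (4294967296 and 5368709120):

get_zoned_devs() {                       # autotest_common.sh@1669-@1673, condensed
    local -gA zoned_devs=()
    local nvme
    for nvme in /sys/block/nvme*; do
        # a namespace is zoned when queue/zoned reads anything but "none"
        if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1
        fi
    done
}

blocks=()
min_disk_size=3221225472                 # 3 GiB, as declared at devices.sh@198
for block in /sys/block/nvme*; do        # the real loop uses extglob: "nvme"!(*c*)
    dev=${block##*/}
    [[ -v zoned_devs[$dev] ]] && continue
    # an existing partition table means the disk is in use; on a clean disk
    # spdk-gpt.py prints "No valid GPT data, bailing" and blkid returns nothing
    pt=$(blkid -s PTTYPE -o value "/dev/$dev") && [[ -n $pt ]] && continue
    size=$(( $(< "$block/size") * 512 ))  # assumed: size file counts 512 B sectors
    (( size >= min_disk_size )) && blocks+=("$dev")
done

As the trace below shows, all four namespaces qualify ("(( 4 > 0 ))") and nvme0n1, behind PCI 0000:00:11.0, becomes the test disk.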
00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:53.930 05:12:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:53.930 05:12:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:53.930 05:12:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:53.930 No valid GPT data, bailing 00:06:53.930 05:12:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:53.930 05:12:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:53.931 05:12:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:53.931 05:12:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:53.931 05:12:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:06:53.931 No valid GPT data, bailing 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:06:53.931 05:12:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:06:53.931 05:12:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:06:53.931 05:12:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:53.931 05:12:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:06:53.931 05:12:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:06:53.931 No valid GPT data, bailing 00:06:53.931 05:12:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:53.931 05:12:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:53.931 05:12:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:06:53.931 05:12:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:06:53.931 05:12:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:06:53.931 05:12:58 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:53.931 05:12:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:53.931 05:12:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:53.931 05:12:58 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:53.931 No valid GPT data, bailing 00:06:53.931 05:12:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:54.191 05:12:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:54.191 05:12:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:54.191 05:12:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:54.191 05:12:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:54.191 05:12:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:54.191 05:12:58 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:54.191 05:12:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:54.191 05:12:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:54.191 05:12:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:54.191 05:12:58 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:06:54.191 05:12:58 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:54.191 05:12:58 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:54.191 05:12:58 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.191 05:12:58 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.191 05:12:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:54.191 ************************************ 00:06:54.191 START TEST nvme_mount 00:06:54.191 ************************************ 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:54.191 05:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:55.127 Creating new GPT entries in memory. 00:06:55.127 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:55.127 other utilities. 00:06:55.127 05:12:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:55.127 05:12:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:55.127 05:12:59 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:55.127 05:12:59 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:55.127 05:12:59 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:56.062 Creating new GPT entries in memory. 00:06:56.062 The operation has completed successfully. 
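With nvme0n1 zapped and a single partition created at sectors 2048-264191, the trace that follows formats the partition ext4 and mounts it for the dummy-file checks. A condensed sketch of the two helpers as traced (setup/common.sh@39-@60 and @66-@72); any option beyond what the log shows is illustrative:

partition_drive() {                  # common.sh@39-@60, condensed
    local disk=$1 part_no=${2:-1} size=1073741824
    local part part_start=0 part_end=0
    local -a parts=()
    for ((part = 1; part <= part_no; part++)); do
        parts+=("${disk}p$part")
    done
    (( size /= 4096 ))               # common.sh@51: 1073741824 -> 262144 units
    sgdisk "/dev/$disk" --zap-all    # wipe any prior GPT/MBR (common.sh@56)
    for ((part = 1; part <= part_no; part++)); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))   # 2048..264191 in the log above
        # flock serializes sgdisk against other writers; the harness pairs this
        # with sync_dev_uevents.sh block/partition to wait for the new dev node
        flock "/dev/$disk" sgdisk "/dev/$disk" "--new=$part:$part_start:$part_end"
    done
}

mkfs() {                             # common.sh@66-@72, condensed
    local dev=$1 mount=$2 size=$3    # size (e.g. 1024M) is optional
    mkdir -p "$mount"
    [[ -e $dev ]] || return 1
    mkfs.ext4 -qF "$dev" $size       # $size deliberately unquoted: empty -> omitted
    mount "$dev" "$mount"
}

The `wait 59038` in the next lines appears to be the harness reaping the backgrounded uevent-sync helper before it touches /dev/nvme0n1p1.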
00:06:56.062 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:56.062 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:56.062 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59038 00:06:56.062 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.062 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:56.063 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:56.320 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:56.320 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:56.320 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:56.320 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.320 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:56.320 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.579 05:13:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:56.579 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:56.579 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:56.838 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:56.838 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:56.838 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:56.838 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:56.838 05:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:56.838 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:56.838 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.838 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:56.838 05:13:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:56.838 05:13:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.838 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:56.838 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:56.838 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:06:56.838 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:56.838 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:56.838 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.098 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:57.386 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.387 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:57.387 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:57.387 05:13:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:57.387 05:13:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:57.644 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.644 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:57.644 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:57.644 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.644 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.644 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.902 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.902 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.902 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.902 05:13:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:57.902 05:13:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:57.902 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:57.902 00:06:57.902 real 0m3.930s 00:06:57.902 user 0m0.669s 00:06:57.903 sys 0m1.019s 00:06:57.903 05:13:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.903 ************************************ 00:06:57.903 END TEST nvme_mount 00:06:57.903 05:13:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:57.903 
************************************ 00:06:58.160 05:13:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:58.160 05:13:02 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.160 05:13:02 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.160 05:13:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:58.160 ************************************ 00:06:58.160 START TEST dm_mount 00:06:58.160 ************************************ 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:58.160 05:13:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:59.094 Creating new GPT entries in memory. 00:06:59.094 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:59.094 other utilities. 00:06:59.094 05:13:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:59.095 05:13:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:59.095 05:13:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:59.095 05:13:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:59.095 05:13:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:00.038 Creating new GPT entries in memory. 00:07:00.038 The operation has completed successfully. 
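For reference, the partitioning just traced condenses to the following sketch of the harness's partition_drive helper. The sector numbers are copied from the sgdisk calls in this log; each partition spans 262,144 sectors, i.e. 128 MiB assuming 512-byte logical sectors.

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                            # drop any existing GPT/MBR metadata
  flock "$disk" sgdisk "$disk" --new=1:2048:264191    # p1: sectors 2048..264191
  flock "$disk" sgdisk "$disk" --new=2:264192:526335  # p2: same size, directly after p1
  # the harness then blocks on scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
  # until udev has created both partition nodes

The two partitions are later joined into a single /dev/mapper/nvme_dm_test device, which is what the dm_mount assertions below verify.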
00:07:00.038 05:13:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:00.038 05:13:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:00.038 05:13:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:00.038 05:13:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:00.038 05:13:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:01.423 The operation has completed successfully. 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59474 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:01.423 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:01.682 05:13:05 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:01.682 05:13:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:01.940 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:01.940 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:01.940 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:01.940 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.940 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:01.940 05:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:01.940 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:01.940 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:02.199 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:07:02.199 00:07:02.199 real 0m4.122s 00:07:02.199 user 0m0.439s 00:07:02.199 sys 0m0.665s 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.199 05:13:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:02.199 ************************************ 00:07:02.199 END TEST dm_mount 00:07:02.199 ************************************ 00:07:02.199 05:13:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:02.199 05:13:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:02.199 05:13:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:02.199 05:13:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:02.199 05:13:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:02.199 05:13:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:02.199 05:13:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:02.458 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:02.458 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:02.458 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:02.458 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:02.458 05:13:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:02.458 05:13:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:02.458 05:13:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:02.458 05:13:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:02.458 05:13:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:02.458 05:13:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:02.458 05:13:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:02.458 00:07:02.458 real 0m9.573s 00:07:02.458 user 0m1.776s 00:07:02.458 sys 0m2.255s 00:07:02.458 05:13:06 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.458 05:13:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:02.458 ************************************ 00:07:02.458 END TEST devices 00:07:02.458 ************************************ 00:07:02.458 00:07:02.458 real 0m20.879s 00:07:02.458 user 0m6.832s 00:07:02.458 sys 0m8.574s 00:07:02.458 05:13:06 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.458 05:13:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:02.458 ************************************ 00:07:02.458 END TEST setup.sh 00:07:02.458 ************************************ 00:07:02.458 05:13:06 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:03.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:03.393 Hugepages 00:07:03.393 node hugesize free / total 00:07:03.393 node0 1048576kB 0 / 0 00:07:03.393 node0 2048kB 2048 / 2048 00:07:03.393 00:07:03.393 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:03.393 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:03.393 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:03.393 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:07:03.393 05:13:07 -- spdk/autotest.sh@130 -- # uname -s 00:07:03.393 05:13:07 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:03.393 05:13:07 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:03.393 05:13:07 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:03.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:04.249 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:04.249 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:04.249 05:13:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:07:05.184 05:13:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:07:05.184 05:13:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:07:05.184 05:13:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:07:05.184 05:13:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:07:05.184 05:13:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:05.184 05:13:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:05.184 05:13:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:05.184 05:13:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:05.184 05:13:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:05.443 05:13:09 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:05.443 05:13:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:05.443 05:13:09 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:05.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:05.701 Waiting for block devices as requested 00:07:05.701 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:05.701 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:05.959 05:13:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:05.959 05:13:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:05.959 05:13:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:05.959 05:13:09 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:07:05.959 05:13:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:05.959 05:13:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:05.959 05:13:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:05.959 05:13:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:07:05.959 05:13:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:07:05.959 05:13:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:07:05.959 05:13:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:07:05.959 05:13:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:05.959 05:13:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:05.959 05:13:09 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:05.959 05:13:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:05.959 05:13:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:05.959 05:13:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:07:05.959 05:13:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:05.959 05:13:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:05.959 05:13:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:05.959 05:13:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:05.959 05:13:09 -- common/autotest_common.sh@1557 -- # continue 00:07:05.959 05:13:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:05.959 05:13:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:05.959 05:13:09 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:07:05.959 05:13:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:05.959 05:13:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:05.959 05:13:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:05.959 05:13:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:05.959 05:13:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:07:05.960 05:13:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:07:05.960 05:13:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:07:05.960 05:13:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:07:05.960 05:13:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:05.960 05:13:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:05.960 05:13:09 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:05.960 05:13:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:05.960 05:13:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:05.960 05:13:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:07:05.960 05:13:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:05.960 05:13:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:05.960 05:13:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:05.960 05:13:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:05.960 05:13:09 -- common/autotest_common.sh@1557 -- # continue 00:07:05.960 05:13:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:05.960 05:13:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.960 05:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:05.960 05:13:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:05.960 05:13:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.960 05:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:05.960 05:13:10 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:06.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.527 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.786 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.786 05:13:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:06.786 05:13:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.786 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:07:06.786 05:13:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:06.786 05:13:10 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:07:06.786 05:13:10 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:07:06.786 05:13:10 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:07:06.786 05:13:10 -- common/autotest_common.sh@1577 -- # local bdfs 00:07:06.786 05:13:10 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:07:06.786 05:13:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:06.786 05:13:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:06.786 05:13:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:06.786 05:13:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:06.786 05:13:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:06.786 05:13:10 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:06.786 05:13:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:06.786 05:13:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:06.786 05:13:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:06.786 05:13:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:06.786 05:13:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:06.786 05:13:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:06.786 05:13:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:06.786 05:13:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:06.786 05:13:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:06.786 05:13:10 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:07:06.786 05:13:10 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:07:06.786 05:13:10 -- common/autotest_common.sh@1593 -- # return 0 00:07:06.786 05:13:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:06.786 05:13:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:06.786 05:13:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:06.786 05:13:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:06.786 05:13:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:06.786 05:13:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.786 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:07:06.786 05:13:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:06.786 05:13:10 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:06.786 05:13:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.786 05:13:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.786 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:07:06.786 ************************************ 00:07:06.786 START TEST env 00:07:06.786 ************************************ 00:07:06.786 05:13:10 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:07.045 * Looking for test storage... 
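The opal_revert_cleanup pass traced just above reduces to this enumeration (a sketch; gen_nvme.sh and the jq filter are taken verbatim from the trace, and 0x0a54 is the PCI device ID the opal tests key on; the emulated controllers in this job report 0x0010, so the list comes back empty and the step is a no-op):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && echo "$bdf"   # keep only opal-capable parts
  done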
00:07:07.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:07.045 05:13:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:07.045 05:13:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.045 05:13:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.045 05:13:11 env -- common/autotest_common.sh@10 -- # set +x 00:07:07.045 ************************************ 00:07:07.045 START TEST env_memory 00:07:07.045 ************************************ 00:07:07.045 05:13:11 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:07.045 00:07:07.045 00:07:07.045 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.045 http://cunit.sourceforge.net/ 00:07:07.045 00:07:07.045 00:07:07.045 Suite: memory 00:07:07.045 Test: alloc and free memory map ...[2024-07-25 05:13:11.079870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:07.045 passed 00:07:07.045 Test: mem map translation ...[2024-07-25 05:13:11.110595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:07.045 [2024-07-25 05:13:11.110646] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:07.045 [2024-07-25 05:13:11.110702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:07.045 [2024-07-25 05:13:11.110712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:07.045 passed 00:07:07.045 Test: mem map registration ...[2024-07-25 05:13:11.175254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:07.045 [2024-07-25 05:13:11.175307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:07.045 passed 00:07:07.304 Test: mem map adjacent registrations ...passed 00:07:07.304 00:07:07.304 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.304 suites 1 1 n/a 0 0 00:07:07.304 tests 4 4 4 0 0 00:07:07.304 asserts 152 152 152 0 n/a 00:07:07.304 00:07:07.304 Elapsed time = 0.215 seconds 00:07:07.304 00:07:07.304 real 0m0.230s 00:07:07.304 user 0m0.216s 00:07:07.304 sys 0m0.012s 00:07:07.304 05:13:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.304 05:13:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 END TEST env_memory 00:07:07.304 ************************************ 00:07:07.304 05:13:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:07.304 05:13:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.304 05:13:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.304 05:13:11 env -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 START TEST env_vtophys 00:07:07.304 ************************************ 00:07:07.304 05:13:11 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:07.304 EAL: lib.eal log level changed from notice to debug 00:07:07.304 EAL: Detected lcore 0 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 1 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 2 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 3 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 4 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 5 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 6 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 7 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 8 as core 0 on socket 0 00:07:07.304 EAL: Detected lcore 9 as core 0 on socket 0 00:07:07.304 EAL: Maximum logical cores by configuration: 128 00:07:07.304 EAL: Detected CPU lcores: 10 00:07:07.304 EAL: Detected NUMA nodes: 1 00:07:07.304 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:07.304 EAL: Detected shared linkage of DPDK 00:07:07.304 EAL: No shared files mode enabled, IPC will be disabled 00:07:07.304 EAL: Selected IOVA mode 'PA' 00:07:07.304 EAL: Probing VFIO support... 00:07:07.304 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:07.304 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:07.304 EAL: Ask a virtual area of 0x2e000 bytes 00:07:07.304 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:07.304 EAL: Setting up physically contiguous memory... 00:07:07.304 EAL: Setting maximum number of open files to 524288 00:07:07.304 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:07.304 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:07.304 EAL: Ask a virtual area of 0x61000 bytes 00:07:07.304 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:07.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:07.305 EAL: Ask a virtual area of 0x400000000 bytes 00:07:07.305 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:07.305 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:07.305 EAL: Ask a virtual area of 0x61000 bytes 00:07:07.305 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:07.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:07.305 EAL: Ask a virtual area of 0x400000000 bytes 00:07:07.305 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:07.305 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:07.305 EAL: Ask a virtual area of 0x61000 bytes 00:07:07.305 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:07.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:07.305 EAL: Ask a virtual area of 0x400000000 bytes 00:07:07.305 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:07.305 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:07.305 EAL: Ask a virtual area of 0x61000 bytes 00:07:07.305 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:07.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:07.305 EAL: Ask a virtual area of 0x400000000 bytes 00:07:07.305 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:07.305 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:07.305 EAL: Hugepages will be freed exactly as allocated. 
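A quick orientation on the EAL output above: each of the four memseg lists reserves 0x400000000 bytes of virtual address space (8192 segments x 2 MiB = 16 GiB per list), but physical memory is only faulted in from the hugepage pool on demand. The pool this job draws from can be inspected with:

  grep -i huge /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # 2048 pages of 2 MiB here, per the status table earlier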
00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: TSC frequency is ~2200000 KHz 00:07:07.305 EAL: Main lcore 0 is ready (tid=7f8bd843ba00;cpuset=[0]) 00:07:07.305 EAL: Trying to obtain current memory policy. 00:07:07.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.305 EAL: Restoring previous memory policy: 0 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was expanded by 2MB 00:07:07.305 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:07.305 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:07.305 EAL: Mem event callback 'spdk:(nil)' registered 00:07:07.305 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:07.305 00:07:07.305 00:07:07.305 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.305 http://cunit.sourceforge.net/ 00:07:07.305 00:07:07.305 00:07:07.305 Suite: components_suite 00:07:07.305 Test: vtophys_malloc_test ...passed 00:07:07.305 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:07.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.305 EAL: Restoring previous memory policy: 4 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was expanded by 4MB 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was shrunk by 4MB 00:07:07.305 EAL: Trying to obtain current memory policy. 00:07:07.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.305 EAL: Restoring previous memory policy: 4 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was expanded by 6MB 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was shrunk by 6MB 00:07:07.305 EAL: Trying to obtain current memory policy. 00:07:07.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.305 EAL: Restoring previous memory policy: 4 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was expanded by 10MB 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was shrunk by 10MB 00:07:07.305 EAL: Trying to obtain current memory policy. 
00:07:07.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.305 EAL: Restoring previous memory policy: 4 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was expanded by 18MB 00:07:07.305 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.305 EAL: request: mp_malloc_sync 00:07:07.305 EAL: No shared files mode enabled, IPC is disabled 00:07:07.305 EAL: Heap on socket 0 was shrunk by 18MB 00:07:07.305 EAL: Trying to obtain current memory policy. 00:07:07.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.564 EAL: Restoring previous memory policy: 4 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was expanded by 34MB 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was shrunk by 34MB 00:07:07.564 EAL: Trying to obtain current memory policy. 00:07:07.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.564 EAL: Restoring previous memory policy: 4 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was expanded by 66MB 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was shrunk by 66MB 00:07:07.564 EAL: Trying to obtain current memory policy. 00:07:07.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.564 EAL: Restoring previous memory policy: 4 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was expanded by 130MB 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was shrunk by 130MB 00:07:07.564 EAL: Trying to obtain current memory policy. 00:07:07.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.564 EAL: Restoring previous memory policy: 4 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was expanded by 258MB 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was shrunk by 258MB 00:07:07.564 EAL: Trying to obtain current memory policy. 
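The expand/shrink cycles above are not arbitrary: vtophys_spdk_malloc_test steps through allocations of 4, 6, 10, 18, 34, 66, 130 and then 258, 514 and 1026 MB, i.e. 2^n + 2 MB for n = 1..10. A one-liner to confirm the sequence:

  for n in $(seq 1 10); do printf '%sMB ' "$(( (1 << n) + 2 ))"; done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB

Each expansion is answered by a matching shrink once the buffer is freed, exercising the mem event callback ('spdk:(nil)') registered earlier.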
00:07:07.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.564 EAL: Restoring previous memory policy: 4 00:07:07.564 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.564 EAL: request: mp_malloc_sync 00:07:07.564 EAL: No shared files mode enabled, IPC is disabled 00:07:07.564 EAL: Heap on socket 0 was expanded by 514MB 00:07:07.822 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.823 EAL: request: mp_malloc_sync 00:07:07.823 EAL: No shared files mode enabled, IPC is disabled 00:07:07.823 EAL: Heap on socket 0 was shrunk by 514MB 00:07:07.823 EAL: Trying to obtain current memory policy. 00:07:07.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:07.823 EAL: Restoring previous memory policy: 4 00:07:07.823 EAL: Calling mem event callback 'spdk:(nil)' 00:07:07.823 EAL: request: mp_malloc_sync 00:07:07.823 EAL: No shared files mode enabled, IPC is disabled 00:07:07.823 EAL: Heap on socket 0 was expanded by 1026MB 00:07:08.081 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.081 EAL: request: mp_malloc_sync 00:07:08.081 EAL: No shared files mode enabled, IPC is disabled 00:07:08.081 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:08.081 passed 00:07:08.081 00:07:08.081 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.081 suites 1 1 n/a 0 0 00:07:08.081 tests 2 2 2 0 0 00:07:08.081 asserts 5176 5176 5176 0 n/a 00:07:08.081 00:07:08.081 Elapsed time = 0.722 seconds 00:07:08.081 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.081 EAL: request: mp_malloc_sync 00:07:08.081 EAL: No shared files mode enabled, IPC is disabled 00:07:08.081 EAL: Heap on socket 0 was shrunk by 2MB 00:07:08.081 EAL: No shared files mode enabled, IPC is disabled 00:07:08.081 EAL: No shared files mode enabled, IPC is disabled 00:07:08.081 EAL: No shared files mode enabled, IPC is disabled 00:07:08.081 00:07:08.081 real 0m0.917s 00:07:08.081 user 0m0.470s 00:07:08.081 sys 0m0.311s 00:07:08.081 05:13:12 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.081 05:13:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:08.081 ************************************ 00:07:08.081 END TEST env_vtophys 00:07:08.081 ************************************ 00:07:08.339 05:13:12 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:08.339 05:13:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.339 05:13:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.339 05:13:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:08.339 ************************************ 00:07:08.339 START TEST env_pci 00:07:08.339 ************************************ 00:07:08.339 05:13:12 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:08.339 00:07:08.339 00:07:08.339 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.339 http://cunit.sourceforge.net/ 00:07:08.339 00:07:08.339 00:07:08.339 Suite: pci 00:07:08.340 Test: pci_hook ...[2024-07-25 05:13:12.287555] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60659 has claimed it 00:07:08.340 passed 00:07:08.340 00:07:08.340 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.340 suites 1 1 n/a 0 0 00:07:08.340 tests 1 1 1 0 0 00:07:08.340 asserts 25 25 25 0 n/a 00:07:08.340 00:07:08.340 Elapsed time = 0.002 seconds 00:07:08.340 EAL: Cannot find 
device (10000:00:01.0) 00:07:08.340 EAL: Failed to attach device on primary process 00:07:08.340 00:07:08.340 real 0m0.019s 00:07:08.340 user 0m0.008s 00:07:08.340 sys 0m0.011s 00:07:08.340 05:13:12 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.340 ************************************ 00:07:08.340 05:13:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:08.340 END TEST env_pci 00:07:08.340 ************************************ 00:07:08.340 05:13:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:08.340 05:13:12 env -- env/env.sh@15 -- # uname 00:07:08.340 05:13:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:08.340 05:13:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:08.340 05:13:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:08.340 05:13:12 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:08.340 05:13:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.340 05:13:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:08.340 ************************************ 00:07:08.340 START TEST env_dpdk_post_init 00:07:08.340 ************************************ 00:07:08.340 05:13:12 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:08.340 EAL: Detected CPU lcores: 10 00:07:08.340 EAL: Detected NUMA nodes: 1 00:07:08.340 EAL: Detected shared linkage of DPDK 00:07:08.340 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:08.340 EAL: Selected IOVA mode 'PA' 00:07:08.340 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:08.340 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:08.340 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:08.598 Starting DPDK initialization... 00:07:08.598 Starting SPDK post initialization... 00:07:08.598 SPDK NVMe probe 00:07:08.598 Attaching to 0000:00:10.0 00:07:08.598 Attaching to 0000:00:11.0 00:07:08.598 Attached to 0000:00:10.0 00:07:08.598 Attached to 0000:00:11.0 00:07:08.598 Cleaning up... 
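The post-init run above can be reproduced by hand; the binary path and EAL options are exactly as traced (a sketch: -c 0x1 pins the app to core 0, and --base-virtaddr fixes the DPDK mapping base so registrations land at predictable addresses):

  cd /home/vagrant/spdk_repo/spdk
  ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
  # expected: both emulated controllers (0000:00:10.0, 0000:00:11.0) attach via spdk_nvme and detach cleanly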
00:07:08.598 00:07:08.598 real 0m0.175s 00:07:08.598 user 0m0.043s 00:07:08.598 sys 0m0.033s 00:07:08.598 05:13:12 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.598 05:13:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:08.598 ************************************ 00:07:08.598 END TEST env_dpdk_post_init 00:07:08.598 ************************************ 00:07:08.598 05:13:12 env -- env/env.sh@26 -- # uname 00:07:08.598 05:13:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:08.598 05:13:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:08.598 05:13:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.598 05:13:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.598 05:13:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:08.598 ************************************ 00:07:08.598 START TEST env_mem_callbacks 00:07:08.598 ************************************ 00:07:08.598 05:13:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:08.598 EAL: Detected CPU lcores: 10 00:07:08.598 EAL: Detected NUMA nodes: 1 00:07:08.598 EAL: Detected shared linkage of DPDK 00:07:08.598 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:08.598 EAL: Selected IOVA mode 'PA' 00:07:08.598 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:08.598 00:07:08.598 00:07:08.598 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.598 http://cunit.sourceforge.net/ 00:07:08.598 00:07:08.598 00:07:08.598 Suite: memory 00:07:08.598 Test: test ... 00:07:08.598 register 0x200000200000 2097152 00:07:08.598 malloc 3145728 00:07:08.598 register 0x200000400000 4194304 00:07:08.598 buf 0x200000500000 len 3145728 PASSED 00:07:08.598 malloc 64 00:07:08.598 buf 0x2000004fff40 len 64 PASSED 00:07:08.598 malloc 4194304 00:07:08.598 register 0x200000800000 6291456 00:07:08.598 buf 0x200000a00000 len 4194304 PASSED 00:07:08.598 free 0x200000500000 3145728 00:07:08.598 free 0x2000004fff40 64 00:07:08.598 unregister 0x200000400000 4194304 PASSED 00:07:08.598 free 0x200000a00000 4194304 00:07:08.598 unregister 0x200000800000 6291456 PASSED 00:07:08.598 malloc 8388608 00:07:08.598 register 0x200000400000 10485760 00:07:08.598 buf 0x200000600000 len 8388608 PASSED 00:07:08.598 free 0x200000600000 8388608 00:07:08.598 unregister 0x200000400000 10485760 PASSED 00:07:08.598 passed 00:07:08.598 00:07:08.598 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.598 suites 1 1 n/a 0 0 00:07:08.598 tests 1 1 1 0 0 00:07:08.598 asserts 15 15 15 0 n/a 00:07:08.598 00:07:08.598 Elapsed time = 0.008 seconds 00:07:08.598 00:07:08.598 real 0m0.142s 00:07:08.598 user 0m0.018s 00:07:08.598 sys 0m0.023s 00:07:08.598 05:13:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.598 ************************************ 00:07:08.598 END TEST env_mem_callbacks 00:07:08.598 ************************************ 00:07:08.598 05:13:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:08.598 00:07:08.598 real 0m1.808s 00:07:08.598 user 0m0.847s 00:07:08.598 sys 0m0.606s 00:07:08.598 05:13:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.598 ************************************ 00:07:08.598 END TEST env 00:07:08.598 05:13:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:08.598 
00:07:08.856 05:13:12 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:07:08.856 05:13:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:08.856 05:13:12 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:08.856 05:13:12 -- common/autotest_common.sh@10 -- # set +x
00:07:08.856 ************************************
00:07:08.856 START TEST rpc
00:07:08.856 ************************************
00:07:08.856 05:13:12 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:07:08.857 * Looking for test storage...
00:07:08.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:07:08.857 05:13:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60774
00:07:08.857 05:13:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:08.857 05:13:12 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:07:08.857 05:13:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60774
00:07:08.857 05:13:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 60774 ']'
00:07:08.857 05:13:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:08.857 05:13:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:08.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:08.857 05:13:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:08.857 05:13:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:08.857 05:13:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:08.857 [2024-07-25 05:13:12.949528] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:07:08.857 [2024-07-25 05:13:12.949631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60774 ]
00:07:09.115 [2024-07-25 05:13:13.086374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.115 [2024-07-25 05:13:13.158135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:07:09.115 [2024-07-25 05:13:13.158197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60774' to capture a snapshot of events at runtime.
00:07:09.115 [2024-07-25 05:13:13.158211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:09.115 [2024-07-25 05:13:13.158222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:09.115 [2024-07-25 05:13:13.158231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60774 for offline analysis/debug.
00:07:09.115 [2024-07-25 05:13:13.158286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
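[Annotation] waitforlisten blocks until the target's UNIX-domain RPC socket accepts connections. A sketch of the same launch-and-poll pattern, assuming the stock scripts/rpc.py client and the default socket path shown in the log:

    # Sketch: start spdk_tgt and wait for its RPC socket to come up.
    $ ./build/bin/spdk_tgt -e bdev &
    $ pid=$!
    $ until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
          sleep 0.2   # spdk_get_version is a cheap core RPC to poll
      done
    $ echo "spdk_tgt (pid $pid) is listening"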
00:07:10.050 05:13:13 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:10.050 05:13:13 rpc -- common/autotest_common.sh@864 -- # return 0
00:07:10.050 05:13:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:07:10.050 05:13:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:07:10.050 05:13:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:07:10.050 05:13:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:07:10.050 05:13:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:10.050 05:13:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:10.050 05:13:13 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.050 ************************************
00:07:10.050 START TEST rpc_integrity
00:07:10.050 ************************************
00:07:10.050 05:13:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:07:10.050 05:13:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:10.050 05:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.050 05:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
    {
        "aliases": [ "ef2eea3a-39f4-4dbe-9bd3-45aafd86fc95" ],
        "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
        "block_size": 512,
        "claimed": false,
        "driver_specific": {},
        "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
        "name": "Malloc0",
        "num_blocks": 16384,
        "product_name": "Malloc disk",
        "supported_io_types": { "abort": true, "compare": false, "compare_and_write": false, "copy": true, "flush": true, "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false, "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true, "write": true, "write_zeroes": true, "zcopy": true, "zone_append": false, "zone_management": false },
        "uuid": "ef2eea3a-39f4-4dbe-9bd3-45aafd86fc95",
        "zoned": false
    }
]'
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.050 [2024-07-25 05:13:14.151754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:07:10.050 [2024-07-25 05:13:14.151812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:10.050 [2024-07-25 05:13:14.151833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x114dad0
00:07:10.050 [2024-07-25 05:13:14.151843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:10.050 [2024-07-25 05:13:14.153519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:10.050 [2024-07-25 05:13:14.153559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:10.050 Passthru0
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.050 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.050 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
    {
        "aliases": [ "ef2eea3a-39f4-4dbe-9bd3-45aafd86fc95" ],
        "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
        "block_size": 512,
        "claim_type": "exclusive_write",
        "claimed": true,
        "driver_specific": {},
        "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
        "name": "Malloc0",
        "num_blocks": 16384,
        "product_name": "Malloc disk",
        "supported_io_types": { "abort": true, "compare": false, "compare_and_write": false, "copy": true, "flush": true, "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false, "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true, "write": true, "write_zeroes": true, "zcopy": true, "zone_append": false, "zone_management": false },
        "uuid": "ef2eea3a-39f4-4dbe-9bd3-45aafd86fc95",
        "zoned": false
    },
    {
        "aliases": [ "5ceaf93b-5f6f-5e1a-ab62-ebf5885e2924" ],
        "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
        "block_size": 512,
        "claimed": false,
        "driver_specific": { "passthru": { "base_bdev_name": "Malloc0", "name": "Passthru0" } },
        "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
        "name": "Passthru0",
        "num_blocks": 16384,
        "product_name": "passthru",
        "supported_io_types": { "abort": true, "compare": false, "compare_and_write": false, "copy": true, "flush": true, "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false, "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true, "write": true, "write_zeroes": true, "zcopy": true, "zone_append": false, "zone_management": false },
        "uuid": "5ceaf93b-5f6f-5e1a-ab62-ebf5885e2924",
        "zoned": false
    }
]'
00:07:10.309 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:07:10.309 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:10.309 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.309 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.309 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.309 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.310 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:10.310 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:07:10.310 05:13:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:10.310
00:07:10.310 real	0m0.315s
00:07:10.310 user	0m0.200s
00:07:10.310 sys	0m0.043s
00:07:10.310 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:10.310 05:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:10.310 ************************************
00:07:10.310 END TEST rpc_integrity
00:07:10.310 ************************************
00:07:10.310 05:13:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:07:10.310 05:13:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:10.310 05:13:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:10.310 05:13:14 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.310 ************************************
00:07:10.310 START TEST rpc_plugins
00:07:10.310 ************************************
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
    {
        "aliases": [ "6909ff44-9f1c-4d3f-a6af-e43d1562a0bf" ],
        "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
        "block_size": 4096,
        "claimed": false,
        "driver_specific": {},
        "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
        "name": "Malloc1",
        "num_blocks": 256,
        "product_name": "Malloc disk",
        "supported_io_types": { "abort": true, "compare": false, "compare_and_write": false, "copy": true, "flush": true, "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false, "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true, "write": true, "write_zeroes": true, "zcopy": true, "zone_append": false, "zone_management": false },
        "uuid": "6909ff44-9f1c-4d3f-a6af-e43d1562a0bf",
        "zoned": false
    }
]'
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:10.310 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.310 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:07:10.569 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:07:10.569 05:13:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:07:10.569
00:07:10.569 real	0m0.166s
00:07:10.569 user	0m0.114s
00:07:10.569 sys	0m0.016s
00:07:10.569 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:10.569 05:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:10.569 ************************************
00:07:10.569 END TEST rpc_plugins
00:07:10.569 ************************************
00:07:10.569 "tpoint_mask": "0x0" 00:07:10.569 }, 00:07:10.569 "nvme_tcp": { 00:07:10.569 "mask": "0x2000", 00:07:10.569 "tpoint_mask": "0x0" 00:07:10.569 }, 00:07:10.569 "nvmf_rdma": { 00:07:10.569 "mask": "0x10", 00:07:10.569 "tpoint_mask": "0x0" 00:07:10.569 }, 00:07:10.569 "nvmf_tcp": { 00:07:10.569 "mask": "0x20", 00:07:10.569 "tpoint_mask": "0x0" 00:07:10.569 }, 00:07:10.569 "scsi": { 00:07:10.569 "mask": "0x4", 00:07:10.569 "tpoint_mask": "0x0" 00:07:10.569 }, 00:07:10.569 "sock": { 00:07:10.569 "mask": "0x8000", 00:07:10.569 "tpoint_mask": "0x0" 00:07:10.569 }, 00:07:10.569 "thread": { 00:07:10.569 "mask": "0x400", 00:07:10.569 "tpoint_mask": "0x0" 00:07:10.569 }, 00:07:10.569 "tpoint_group_mask": "0x8", 00:07:10.569 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60774" 00:07:10.569 }' 00:07:10.569 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:10.569 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:10.569 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:10.569 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:10.569 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:10.836 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:10.836 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:10.836 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:10.836 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:10.836 05:13:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:10.836 00:07:10.836 real 0m0.277s 00:07:10.836 user 0m0.238s 00:07:10.836 sys 0m0.026s 00:07:10.836 05:13:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.836 05:13:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.836 ************************************ 00:07:10.836 END TEST rpc_trace_cmd_test 00:07:10.836 ************************************ 00:07:10.836 05:13:14 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:07:10.836 05:13:14 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:07:10.836 05:13:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.836 05:13:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.836 05:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.836 ************************************ 00:07:10.836 START TEST go_rpc 00:07:10.836 ************************************ 00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:10.836 05:13:14 rpc.go_rpc 
00:07:10.836 05:13:14 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]]
00:07:10.836 05:13:14 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc
00:07:10.836 05:13:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:10.836 05:13:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:10.836 05:13:14 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.836 ************************************
00:07:10.836 START TEST go_rpc
00:07:10.836 ************************************
00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]'
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']'
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512
00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.836 05:13:14 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["7696c8c0-907a-49dc-a6be-f22270890b82"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"7696c8c0-907a-49dc-a6be-f22270890b82","zoned":false}]'
00:07:10.836 05:13:14 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length
00:07:11.095 05:13:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']'
00:07:11.095 05:13:15 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2
00:07:11.095 05:13:15 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.095 05:13:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.095 05:13:15 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.095 05:13:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:07:11.095 05:13:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]'
00:07:11.095 05:13:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length
00:07:11.095 05:13:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']'
00:07:11.095
00:07:11.095 real	0m0.209s
00:07:11.095 user	0m0.152s
00:07:11.095 sys	0m0.030s
00:07:11.095 05:13:15 rpc.go_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:11.095 ************************************
00:07:11.095 END TEST go_rpc
00:07:11.095 ************************************
00:07:11.095 05:13:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x
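[Annotation] go_rpc drives the same JSON-RPC socket from the Go client example instead of rpc.py: hello_gorpc prints the current bdev list, which the test compares before and after creating Malloc2. A sketch of the same sequence, assuming hello_gorpc emits a JSON array on stdout as it does above:

    # Sketch: exercising the Go JSON-RPC client example by hand.
    $ ./build/examples/hello_gorpc                  # -> '[]' while no bdevs exist
    $ ./scripts/rpc.py bdev_malloc_create 8 512     # prints Malloc2
    $ ./build/examples/hello_gorpc | jq length      # -> 1
    $ ./scripts/rpc.py bdev_malloc_delete Malloc2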
00:07:11.095 05:13:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:07:11.095 05:13:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:07:11.095 05:13:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:11.095 05:13:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:11.095 05:13:15 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.095 ************************************
00:07:11.095 START TEST rpc_daemon_integrity
00:07:11.095 ************************************
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.095 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
    {
        "aliases": [ "9c8b1883-c52f-4dd9-9036-6079bcc38d65" ],
        "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
        "block_size": 512,
        "claimed": false,
        "driver_specific": {},
        "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
        "name": "Malloc3",
        "num_blocks": 16384,
        "product_name": "Malloc disk",
        "supported_io_types": { "abort": true, "compare": false, "compare_and_write": false, "copy": true, "flush": true, "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false, "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true, "write": true, "write_zeroes": true, "zcopy": true, "zone_append": false, "zone_management": false },
        "uuid": "9c8b1883-c52f-4dd9-9036-6079bcc38d65",
        "zoned": false
    }
]'
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.374 [2024-07-25 05:13:15.296190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:11.374 [2024-07-25 05:13:15.296251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:11.374 [2024-07-25 05:13:15.296275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1344d70
00:07:11.374 [2024-07-25 05:13:15.296285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:11.374 [2024-07-25 05:13:15.297786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:11.374 [2024-07-25 05:13:15.297834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:11.374 Passthru0
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.374 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
    {
        "aliases": [ "9c8b1883-c52f-4dd9-9036-6079bcc38d65" ],
        "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
        "block_size": 512,
        "claim_type": "exclusive_write",
        "claimed": true,
        "driver_specific": {},
        "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
        "name": "Malloc3",
        "num_blocks": 16384,
        "product_name": "Malloc disk",
        "supported_io_types": { "abort": true, "compare": false, "compare_and_write": false, "copy": true, "flush": true, "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false, "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true, "write": true, "write_zeroes": true, "zcopy": true, "zone_append": false, "zone_management": false },
        "uuid": "9c8b1883-c52f-4dd9-9036-6079bcc38d65",
        "zoned": false
    },
    {
        "aliases": [ "a9dba025-a56e-5433-a6c2-c7558ddd2fca" ],
        "assigned_rate_limits": { "r_mbytes_per_sec": 0, "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
        "block_size": 512,
        "claimed": false,
        "driver_specific": { "passthru": { "base_bdev_name": "Malloc3", "name": "Passthru0" } },
        "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
        "name": "Passthru0",
        "num_blocks": 16384,
        "product_name": "passthru",
        "supported_io_types": { "abort": true, "compare": false, "compare_and_write": false, "copy": true, "flush": true, "get_zone_info": false, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "nvme_iov_md": false, "read": true, "reset": true, "seek_data": false, "seek_hole": false, "unmap": true, "write": true, "write_zeroes": true, "zcopy": true, "zone_append": false, "zone_management": false },
        "uuid": "a9dba025-a56e-5433-a6c2-c7558ddd2fca",
        "zoned": false
    }
]'
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:07:11.377 ************************************
00:07:11.377 END TEST rpc_daemon_integrity
00:07:11.377 ************************************
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:11.377
00:07:11.377 real	0m0.324s
00:07:11.377 user	0m0.213s
00:07:11.377 sys	0m0.039s
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:11.377 05:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:11.377 05:13:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:07:11.377 05:13:15 rpc -- rpc/rpc.sh@84 -- # killprocess 60774
00:07:11.377 05:13:15 rpc -- common/autotest_common.sh@950 -- # '[' -z 60774 ']'
00:07:11.377 05:13:15 rpc -- common/autotest_common.sh@954 -- # kill -0 60774
00:07:11.377 05:13:15 rpc -- common/autotest_common.sh@955 -- # uname
00:07:11.378 05:13:15 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:11.378 05:13:15 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60774
00:07:11.378 killing process with pid 60774
00:07:11.378 05:13:15 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:11.378 05:13:15 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:11.378 05:13:15 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60774'
00:07:11.378 05:13:15 rpc -- common/autotest_common.sh@969 -- # kill 60774
00:07:11.378 05:13:15 rpc -- common/autotest_common.sh@974 -- # wait 60774
00:07:11.642
00:07:11.642 real	0m3.000s
00:07:11.642 user	0m4.134s
00:07:11.642 sys	0m0.643s
00:07:11.642 05:13:15 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:11.642 05:13:15 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.642 ************************************
00:07:11.642 END TEST rpc
00:07:11.642 ************************************
00:07:11.900 05:13:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:07:11.900 05:13:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:11.900 05:13:15 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:11.900 05:13:15 -- common/autotest_common.sh@10 -- # set +x
00:07:11.900 ************************************
00:07:11.900 START TEST skip_rpc
00:07:11.900 ************************************
00:07:11.900 05:13:15 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:07:11.900 * Looking for test storage...
00:07:11.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:07:11.900 05:13:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:07:11.900 05:13:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:07:11.900 05:13:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:07:11.900 05:13:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:11.900 05:13:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:11.900 05:13:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.900 ************************************
00:07:11.900 START TEST skip_rpc
00:07:11.900 ************************************
00:07:11.900 05:13:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc
00:07:11.900 05:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61035
00:07:11.900 05:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:07:11.900 05:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:11.900 05:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:07:11.900 [2024-07-25 05:13:15.984660] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:07:11.900 [2024-07-25 05:13:15.984747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ]
00:07:12.159 [2024-07-25 05:13:16.142510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.159 [2024-07-25 05:13:16.212888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:17.465 2024/07/25 05:13:20 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 61035
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 61035 ']'
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 61035
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:17.465 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61035
00:07:17.466 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:17.466 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:17.466 killing process with pid 61035
00:07:17.466 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61035'
00:07:17.466 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 61035
00:07:17.466 05:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 61035
00:07:17.466
00:07:17.466 real	0m5.306s
00:07:17.466 user	0m5.019s
00:07:17.466 sys	0m0.180s
00:07:17.466 05:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:17.466 05:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:17.466 ************************************
00:07:17.466 END TEST skip_rpc
00:07:17.466 ************************************
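[Annotation] skip_rpc starts the target with --no-rpc-server and asserts (via the NOT helper) that any RPC attempt fails with the "dial unix ... no such file or directory" error seen above, because the socket is never created. A sketch of the same negative check:

    # Sketch: demonstrate that RPC fails while the server is disabled.
    $ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    $ sleep 5
    $ if ./scripts/rpc.py spdk_get_version; then
          echo "unexpected: RPC server should not be listening"
      else
          echo "RPC refused, as expected"
      fi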
00:07:17.466 05:13:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:07:17.466 05:13:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:17.466 05:13:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:17.466 05:13:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:17.466 ************************************
00:07:17.466 START TEST skip_rpc_with_json
00:07:17.466 ************************************
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61122
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61122
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 61122 ']'
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:17.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:17.466 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:17.466 [2024-07-25 05:13:21.333851] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:07:17.466 [2024-07-25 05:13:21.333936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61122 ]
00:07:17.466 [2024-07-25 05:13:21.475108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.466 [2024-07-25 05:13:21.544728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.724 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:17.724 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:07:17.724 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:07:17.724 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.724 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:17.724 [2024-07-25 05:13:21.720487] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:07:17.724 2024/07/25 05:13:21 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device
00:07:17.724 request:
00:07:17.724 {
00:07:17.724   "method": "nvmf_get_transports",
00:07:17.724   "params": { "trtype": "tcp" }
00:07:17.724 }
00:07:17.724 Got JSON-RPC error response
00:07:17.724 GoRPCClient: error on JSON-RPC call
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:17.725 [2024-07-25 05:13:21.732698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:17.725 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.983 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
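[Annotation] The with_json test first shows nvmf_get_transports failing (no transport exists yet, hence the Code=-19 error above), then creates the TCP transport so that the saved configuration will contain it. The same steps by hand, using the RPC names from the log:

    # Sketch: create the nvmf TCP transport, then confirm it is queryable.
    $ ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails: does not exist
    $ ./scripts/rpc.py nvmf_create_transport -t tcp
    $ ./scripts/rpc.py nvmf_get_transports --trtype tcp | jq .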
"sock_impl_set_options", 00:07:17.983 "params": { 00:07:17.983 "enable_ktls": false, 00:07:17.983 "enable_placement_id": 0, 00:07:17.983 "enable_quickack": false, 00:07:17.983 "enable_recv_pipe": true, 00:07:17.983 "enable_zerocopy_send_client": false, 00:07:17.983 "enable_zerocopy_send_server": true, 00:07:17.983 "impl_name": "ssl", 00:07:17.983 "recv_buf_size": 4096, 00:07:17.983 "send_buf_size": 4096, 00:07:17.983 "tls_version": 0, 00:07:17.983 "zerocopy_threshold": 0 00:07:17.983 } 00:07:17.983 }, 00:07:17.983 { 00:07:17.983 "method": "sock_impl_set_options", 00:07:17.983 "params": { 00:07:17.983 "enable_ktls": false, 00:07:17.983 "enable_placement_id": 0, 00:07:17.983 "enable_quickack": false, 00:07:17.983 "enable_recv_pipe": true, 00:07:17.983 "enable_zerocopy_send_client": false, 00:07:17.983 "enable_zerocopy_send_server": true, 00:07:17.983 "impl_name": "posix", 00:07:17.983 "recv_buf_size": 2097152, 00:07:17.983 "send_buf_size": 2097152, 00:07:17.983 "tls_version": 0, 00:07:17.983 "zerocopy_threshold": 0 00:07:17.983 } 00:07:17.983 } 00:07:17.983 ] 00:07:17.983 }, 00:07:17.983 { 00:07:17.983 "subsystem": "vmd", 00:07:17.983 "config": [] 00:07:17.983 }, 00:07:17.983 { 00:07:17.983 "subsystem": "accel", 00:07:17.983 "config": [ 00:07:17.983 { 00:07:17.983 "method": "accel_set_options", 00:07:17.983 "params": { 00:07:17.983 "buf_count": 2048, 00:07:17.983 "large_cache_size": 16, 00:07:17.983 "sequence_count": 2048, 00:07:17.983 "small_cache_size": 128, 00:07:17.983 "task_count": 2048 00:07:17.983 } 00:07:17.983 } 00:07:17.983 ] 00:07:17.983 }, 00:07:17.983 { 00:07:17.983 "subsystem": "bdev", 00:07:17.983 "config": [ 00:07:17.983 { 00:07:17.983 "method": "bdev_set_options", 00:07:17.983 "params": { 00:07:17.983 "bdev_auto_examine": true, 00:07:17.983 "bdev_io_cache_size": 256, 00:07:17.983 "bdev_io_pool_size": 65535, 00:07:17.983 "iobuf_large_cache_size": 16, 00:07:17.983 "iobuf_small_cache_size": 128 00:07:17.983 } 00:07:17.983 }, 00:07:17.983 { 00:07:17.983 "method": "bdev_raid_set_options", 00:07:17.983 "params": { 00:07:17.983 "process_max_bandwidth_mb_sec": 0, 00:07:17.983 "process_window_size_kb": 1024 00:07:17.983 } 00:07:17.983 }, 00:07:17.984 { 00:07:17.984 "method": "bdev_iscsi_set_options", 00:07:17.984 "params": { 00:07:17.984 "timeout_sec": 30 00:07:17.984 } 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "method": "bdev_nvme_set_options", 00:07:17.984 "params": { 00:07:17.984 "action_on_timeout": "none", 00:07:17.984 "allow_accel_sequence": false, 00:07:17.984 "arbitration_burst": 0, 00:07:17.984 "bdev_retry_count": 3, 00:07:17.984 "ctrlr_loss_timeout_sec": 0, 00:07:17.984 "delay_cmd_submit": true, 00:07:17.984 "dhchap_dhgroups": [ 00:07:17.984 "null", 00:07:17.984 "ffdhe2048", 00:07:17.984 "ffdhe3072", 00:07:17.984 "ffdhe4096", 00:07:17.984 "ffdhe6144", 00:07:17.984 "ffdhe8192" 00:07:17.984 ], 00:07:17.984 "dhchap_digests": [ 00:07:17.984 "sha256", 00:07:17.984 "sha384", 00:07:17.984 "sha512" 00:07:17.984 ], 00:07:17.984 "disable_auto_failback": false, 00:07:17.984 "fast_io_fail_timeout_sec": 0, 00:07:17.984 "generate_uuids": false, 00:07:17.984 "high_priority_weight": 0, 00:07:17.984 "io_path_stat": false, 00:07:17.984 "io_queue_requests": 0, 00:07:17.984 "keep_alive_timeout_ms": 10000, 00:07:17.984 "low_priority_weight": 0, 00:07:17.984 "medium_priority_weight": 0, 00:07:17.984 "nvme_adminq_poll_period_us": 10000, 00:07:17.984 "nvme_error_stat": false, 00:07:17.984 "nvme_ioq_poll_period_us": 0, 00:07:17.984 "rdma_cm_event_timeout_ms": 0, 00:07:17.984 "rdma_max_cq_size": 
0, 00:07:17.984 "rdma_srq_size": 0, 00:07:17.984 "reconnect_delay_sec": 0, 00:07:17.984 "timeout_admin_us": 0, 00:07:17.984 "timeout_us": 0, 00:07:17.984 "transport_ack_timeout": 0, 00:07:17.984 "transport_retry_count": 4, 00:07:17.984 "transport_tos": 0 00:07:17.984 } 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "method": "bdev_nvme_set_hotplug", 00:07:17.984 "params": { 00:07:17.984 "enable": false, 00:07:17.984 "period_us": 100000 00:07:17.984 } 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "method": "bdev_wait_for_examine" 00:07:17.984 } 00:07:17.984 ] 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "scsi", 00:07:17.984 "config": null 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "scheduler", 00:07:17.984 "config": [ 00:07:17.984 { 00:07:17.984 "method": "framework_set_scheduler", 00:07:17.984 "params": { 00:07:17.984 "name": "static" 00:07:17.984 } 00:07:17.984 } 00:07:17.984 ] 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "vhost_scsi", 00:07:17.984 "config": [] 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "vhost_blk", 00:07:17.984 "config": [] 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "ublk", 00:07:17.984 "config": [] 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "nbd", 00:07:17.984 "config": [] 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "nvmf", 00:07:17.984 "config": [ 00:07:17.984 { 00:07:17.984 "method": "nvmf_set_config", 00:07:17.984 "params": { 00:07:17.984 "admin_cmd_passthru": { 00:07:17.984 "identify_ctrlr": false 00:07:17.984 }, 00:07:17.984 "discovery_filter": "match_any" 00:07:17.984 } 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "method": "nvmf_set_max_subsystems", 00:07:17.984 "params": { 00:07:17.984 "max_subsystems": 1024 00:07:17.984 } 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "method": "nvmf_set_crdt", 00:07:17.984 "params": { 00:07:17.984 "crdt1": 0, 00:07:17.984 "crdt2": 0, 00:07:17.984 "crdt3": 0 00:07:17.984 } 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "method": "nvmf_create_transport", 00:07:17.984 "params": { 00:07:17.984 "abort_timeout_sec": 1, 00:07:17.984 "ack_timeout": 0, 00:07:17.984 "buf_cache_size": 4294967295, 00:07:17.984 "c2h_success": true, 00:07:17.984 "data_wr_pool_size": 0, 00:07:17.984 "dif_insert_or_strip": false, 00:07:17.984 "in_capsule_data_size": 4096, 00:07:17.984 "io_unit_size": 131072, 00:07:17.984 "max_aq_depth": 128, 00:07:17.984 "max_io_qpairs_per_ctrlr": 127, 00:07:17.984 "max_io_size": 131072, 00:07:17.984 "max_queue_depth": 128, 00:07:17.984 "num_shared_buffers": 511, 00:07:17.984 "sock_priority": 0, 00:07:17.984 "trtype": "TCP", 00:07:17.984 "zcopy": false 00:07:17.984 } 00:07:17.984 } 00:07:17.984 ] 00:07:17.984 }, 00:07:17.984 { 00:07:17.984 "subsystem": "iscsi", 00:07:17.984 "config": [ 00:07:17.984 { 00:07:17.984 "method": "iscsi_set_options", 00:07:17.984 "params": { 00:07:17.984 "allow_duplicated_isid": false, 00:07:17.984 "chap_group": 0, 00:07:17.984 "data_out_pool_size": 2048, 00:07:17.984 "default_time2retain": 20, 00:07:17.984 "default_time2wait": 2, 00:07:17.984 "disable_chap": false, 00:07:17.984 "error_recovery_level": 0, 00:07:17.984 "first_burst_length": 8192, 00:07:17.984 "immediate_data": true, 00:07:17.984 "immediate_data_pool_size": 16384, 00:07:17.984 "max_connections_per_session": 2, 00:07:17.984 "max_large_datain_per_connection": 64, 00:07:17.984 "max_queue_depth": 64, 00:07:17.984 "max_r2t_per_connection": 4, 00:07:17.984 "max_sessions": 128, 00:07:17.984 "mutual_chap": false, 00:07:17.984 "node_base": "iqn.2016-06.io.spdk", 
00:07:17.984 "nop_in_interval": 30, 00:07:17.984 "nop_timeout": 60, 00:07:17.984 "pdu_pool_size": 36864, 00:07:17.984 "require_chap": false 00:07:17.984 } 00:07:17.984 } 00:07:17.984 ] 00:07:17.984 } 00:07:17.984 ] 00:07:17.984 } 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61122 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 61122 ']' 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 61122 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61122 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.984 killing process with pid 61122 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61122' 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 61122 00:07:17.984 05:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 61122 00:07:18.243 05:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61148 00:07:18.243 05:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:18.243 05:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61148 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 61148 ']' 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 61148 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61148 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.515 killing process with pid 61148 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61148' 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 61148 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 61148 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:23.515 00:07:23.515 real 0m6.192s 00:07:23.515 user 0m5.906s 00:07:23.515 sys 0m0.434s 00:07:23.515 05:13:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:23.515 ************************************ 00:07:23.515 END TEST skip_rpc_with_json 00:07:23.515 ************************************ 00:07:23.515 05:13:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:23.515 05:13:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.515 05:13:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.515 05:13:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.515 ************************************ 00:07:23.515 START TEST skip_rpc_with_delay 00:07:23.515 ************************************ 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:23.515 [2024-07-25 05:13:27.583166] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
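The skip_rpc_with_json sequence above reduces to a small round trip: dump the running target's configuration over RPC, restart the target from that file with no RPC server at all, and confirm from the log that the subsystems (here the TCP transport) came back up. A minimal sketch of that round trip, assuming the repo layout used in this run; the exact kill/wait handling in the test goes through killprocess rather than $!:

    # dump the live config while the first spdk_tgt (pid 61122 here) is still up
    scripts/rpc.py -s /var/tmp/spdk.sock save_config > test/rpc/config.json
    # relaunch purely from JSON; --no-rpc-server makes the file the only input
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt &
    sleep 5
    kill $! && wait $!
    # the transport init notice only appears if the nvmf section was re-applied
    grep -q 'TCP Transport Init' test/rpc/log.txt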
00:07:23.515 [2024-07-25 05:13:27.583296] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.515 00:07:23.515 real 0m0.089s 00:07:23.515 user 0m0.053s 00:07:23.515 sys 0m0.035s 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.515 ************************************ 00:07:23.515 END TEST skip_rpc_with_delay 00:07:23.515 ************************************ 00:07:23.515 05:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:23.515 05:13:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:23.515 05:13:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:23.515 05:13:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:23.515 05:13:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.515 05:13:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.515 05:13:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.515 ************************************ 00:07:23.516 START TEST exit_on_failed_rpc_init 00:07:23.516 ************************************ 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61252 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61252 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 61252 ']' 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.516 05:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:23.773 [2024-07-25 05:13:27.723976] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
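The NOT wrapper and the es bookkeeping traced above implement "this command must fail". A simplified sketch of the pattern; the in-tree helper in common/autotest_common.sh also validates the executable via valid_exec_arg and maps signal deaths (es > 128) onto small codes before inverting:

    NOT() {
        local es=0
        "$@" || es=$?
        # success of the wrapper means failure of the wrapped command
        (( es != 0 ))
    }
    # passes, because --wait-for-rpc without an RPC server must be rejected
    NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc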
00:07:23.773 [2024-07-25 05:13:27.724071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61252 ] 00:07:23.774 [2024-07-25 05:13:27.865428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.774 [2024-07-25 05:13:27.938461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:24.709 05:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.709 [2024-07-25 05:13:28.802358] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:07:24.709 [2024-07-25 05:13:28.802454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61282 ] 00:07:24.967 [2024-07-25 05:13:28.938398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.967 [2024-07-25 05:13:29.008505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.967 [2024-07-25 05:13:29.008596] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
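exit_on_failed_rpc_init provokes exactly this failure: both instances default to the same RPC listen path, so the second one cannot bind it and spdk_app_stop fires. Sketched, with the error the log shows; the last line is only an illustrative workaround, not part of this negative test:

    build/bin/spdk_tgt -m 0x1 &    # first instance claims /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2      # same default socket: rpc.c reports
                                   # "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    # outside the test you would give each instance its own path, e.g.
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock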
00:07:24.967 [2024-07-25 05:13:29.008615] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:24.967 [2024-07-25 05:13:29.008625] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.967 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:24.967 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.967 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61252 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 61252 ']' 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 61252 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61252 00:07:24.968 killing process with pid 61252 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61252' 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 61252 00:07:24.968 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 61252 00:07:25.225 00:07:25.225 real 0m1.727s 00:07:25.225 user 0m2.145s 00:07:25.225 sys 0m0.310s 00:07:25.225 ************************************ 00:07:25.225 END TEST exit_on_failed_rpc_init 00:07:25.225 ************************************ 00:07:25.225 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.225 05:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 05:13:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:25.484 00:07:25.484 real 0m13.585s 00:07:25.484 user 0m13.221s 00:07:25.484 sys 0m1.116s 00:07:25.484 ************************************ 00:07:25.484 END TEST skip_rpc 00:07:25.484 ************************************ 00:07:25.484 05:13:29 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.484 05:13:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 05:13:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:25.484 05:13:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.484 05:13:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.484 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 
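The killprocess calls recurring through this trace all follow one shape: check the pid is alive, identify it, signal it, then reap it. A reduced sketch; the real helper in common/autotest_common.sh additionally special-cases processes started through sudo:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap it and collect the exit status
    }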
************************************ 00:07:25.484 START TEST rpc_client 00:07:25.484 ************************************ 00:07:25.484 05:13:29 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:25.484 * Looking for test storage... 00:07:25.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:25.484 05:13:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:25.484 OK 00:07:25.484 05:13:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:25.484 00:07:25.484 real 0m0.101s 00:07:25.484 user 0m0.052s 00:07:25.484 sys 0m0.054s 00:07:25.484 ************************************ 00:07:25.484 END TEST rpc_client 00:07:25.484 ************************************ 00:07:25.484 05:13:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.484 05:13:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 05:13:29 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:25.484 05:13:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.484 05:13:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.484 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 ************************************ 00:07:25.484 START TEST json_config 00:07:25.484 ************************************ 00:07:25.484 05:13:29 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.743 05:13:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.743 05:13:29 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.743 05:13:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.743 05:13:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.743 05:13:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.743 05:13:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.743 05:13:29 json_config -- paths/export.sh@5 -- # export PATH 00:07:25.743 05:13:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@47 -- # : 0 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.743 05:13:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:25.743 05:13:29 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:25.743 INFO: JSON configuration test init 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:25.743 05:13:29 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:25.743 05:13:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.744 05:13:29 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.744 Waiting for target to run... 00:07:25.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:25.744 05:13:29 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:25.744 05:13:29 json_config -- json_config/common.sh@9 -- # local app=target 00:07:25.744 05:13:29 json_config -- json_config/common.sh@10 -- # shift 00:07:25.744 05:13:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:25.744 05:13:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:25.744 05:13:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:25.744 05:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:25.744 05:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:25.744 05:13:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61400 00:07:25.744 05:13:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
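waitforlisten, called next, gates the test on the target's RPC socket actually answering. A hedged sketch of the idea only; the in-tree helper is more careful (it honors max_retries and distinguishes startup failure from slowness), and rpc_get_methods is simply a side-effect-free RPC that serves as a liveness probe here:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1    # app exited before it started listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }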
00:07:25.744 05:13:29 json_config -- json_config/common.sh@25 -- # waitforlisten 61400 /var/tmp/spdk_tgt.sock 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@831 -- # '[' -z 61400 ']' 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:25.744 05:13:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.744 05:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.744 [2024-07-25 05:13:29.773904] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:07:25.744 [2024-07-25 05:13:29.774034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61400 ] 00:07:26.002 [2024-07-25 05:13:30.089896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.002 [2024-07-25 05:13:30.140649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.937 00:07:26.937 05:13:30 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.937 05:13:30 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:26.937 05:13:30 json_config -- json_config/common.sh@26 -- # echo '' 00:07:26.937 05:13:30 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:26.937 05:13:30 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:26.937 05:13:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.937 05:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.937 05:13:30 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:26.937 05:13:30 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:26.937 05:13:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.937 05:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.937 05:13:30 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:26.937 05:13:30 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:26.937 05:13:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:27.194 05:13:31 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:27.194 05:13:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:27.194 05:13:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.194 05:13:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.194 05:13:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:27.194 05:13:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:27.194 05:13:31 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:07:27.194 05:13:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:27.194 05:13:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:27.194 05:13:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@51 -- # sort 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:27.760 05:13:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.760 05:13:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:27.760 05:13:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.760 05:13:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:07:27.760 05:13:31 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:27.760 05:13:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:28.017 MallocForNvmf0 00:07:28.017 05:13:31 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:28.017 05:13:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:28.275 MallocForNvmf1 00:07:28.275 05:13:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:28.275 05:13:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
nvmf_create_transport -t tcp -u 8192 -c 0 00:07:28.579 [2024-07-25 05:13:32.507509] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.579 05:13:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:28.579 05:13:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:28.837 05:13:32 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:28.838 05:13:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:29.096 05:13:33 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:29.096 05:13:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:29.367 05:13:33 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:29.367 05:13:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:29.624 [2024-07-25 05:13:33.640122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:29.624 05:13:33 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:29.624 05:13:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.625 05:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.625 05:13:33 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:29.625 05:13:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.625 05:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.625 05:13:33 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:29.625 05:13:33 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:29.625 05:13:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:29.882 MallocBdevForConfigChangeCheck 00:07:29.882 05:13:34 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:29.882 05:13:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.882 05:13:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.882 05:13:34 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:29.882 05:13:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:30.448 INFO: shutting down applications... 00:07:30.448 05:13:34 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
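Condensed, the tgt_rpc calls above build the entire NVMe-oF target state that save_config will capture: two malloc bdevs, a TCP transport, one subsystem, its namespaces, and a listener. The same sequence as plain rpc.py invocations against this run's socket:

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420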
00:07:30.448 05:13:34 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:30.448 05:13:34 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:30.448 05:13:34 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:30.448 05:13:34 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:30.706 Calling clear_iscsi_subsystem 00:07:30.706 Calling clear_nvmf_subsystem 00:07:30.706 Calling clear_nbd_subsystem 00:07:30.706 Calling clear_ublk_subsystem 00:07:30.706 Calling clear_vhost_blk_subsystem 00:07:30.706 Calling clear_vhost_scsi_subsystem 00:07:30.706 Calling clear_bdev_subsystem 00:07:30.706 05:13:34 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:30.706 05:13:34 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:30.706 05:13:34 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:30.706 05:13:34 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:30.706 05:13:34 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:30.706 05:13:34 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:31.272 05:13:35 json_config -- json_config/json_config.sh@349 -- # break 00:07:31.272 05:13:35 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:31.272 05:13:35 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:31.272 05:13:35 json_config -- json_config/common.sh@31 -- # local app=target 00:07:31.272 05:13:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:31.272 05:13:35 json_config -- json_config/common.sh@35 -- # [[ -n 61400 ]] 00:07:31.272 05:13:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61400 00:07:31.272 05:13:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:31.272 05:13:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:31.272 05:13:35 json_config -- json_config/common.sh@41 -- # kill -0 61400 00:07:31.272 05:13:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:31.893 05:13:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:31.893 05:13:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:31.893 05:13:35 json_config -- json_config/common.sh@41 -- # kill -0 61400 00:07:31.893 05:13:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:31.893 05:13:35 json_config -- json_config/common.sh@43 -- # break 00:07:31.893 05:13:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:31.893 SPDK target shutdown done 00:07:31.893 05:13:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:31.893 INFO: relaunching applications... 00:07:31.893 05:13:35 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
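The kill -SIGINT / kill -0 / sleep 0.5 cycle above is the stock graceful-shutdown wait: SIGINT asks the reactor to exit cleanly, then the loop polls for up to 30 half-second intervals before the harness would escalate. In isolation:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2> /dev/null || break   # gone: clean shutdown
        sleep 0.5
    done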
00:07:31.893 05:13:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:31.893 05:13:35 json_config -- json_config/common.sh@9 -- # local app=target 00:07:31.893 05:13:35 json_config -- json_config/common.sh@10 -- # shift 00:07:31.893 05:13:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:31.893 05:13:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:31.893 05:13:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:31.893 05:13:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:31.893 05:13:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:31.893 05:13:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61686 00:07:31.893 05:13:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:31.893 Waiting for target to run... 00:07:31.893 05:13:35 json_config -- json_config/common.sh@25 -- # waitforlisten 61686 /var/tmp/spdk_tgt.sock 00:07:31.893 05:13:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:31.893 05:13:35 json_config -- common/autotest_common.sh@831 -- # '[' -z 61686 ']' 00:07:31.893 05:13:35 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:31.893 05:13:35 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:31.893 05:13:35 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:31.893 05:13:35 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.893 05:13:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.893 [2024-07-25 05:13:35.820661] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:07:31.893 [2024-07-25 05:13:35.820767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61686 ] 00:07:32.151 [2024-07-25 05:13:36.122110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.151 [2024-07-25 05:13:36.177235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.409 [2024-07-25 05:13:36.495346] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.409 [2024-07-25 05:13:36.527404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:32.667 05:13:36 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.667 05:13:36 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:32.667 00:07:32.667 05:13:36 json_config -- json_config/common.sh@26 -- # echo '' 00:07:32.667 05:13:36 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:32.667 INFO: Checking if target configuration is the same... 00:07:32.667 05:13:36 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
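The json_diff.sh trace that follows reduces to: re-dump the relaunched target's configuration, canonicalize both documents with config_filter.py -method sort, and diff them; byte-identical output proves the saved JSON reproduced the original state. Roughly (file names illustrative; the script uses mktemp):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'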
00:07:32.667 05:13:36 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:32.667 05:13:36 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:32.667 05:13:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:32.926 + '[' 2 -ne 2 ']' 00:07:32.926 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:32.926 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:32.926 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:32.926 +++ basename /dev/fd/62 00:07:32.926 ++ mktemp /tmp/62.XXX 00:07:32.926 + tmp_file_1=/tmp/62.WyD 00:07:32.926 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:32.926 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:32.926 + tmp_file_2=/tmp/spdk_tgt_config.json.5RG 00:07:32.926 + ret=0 00:07:32.926 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:33.184 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:33.184 + diff -u /tmp/62.WyD /tmp/spdk_tgt_config.json.5RG 00:07:33.184 + echo 'INFO: JSON config files are the same' 00:07:33.184 INFO: JSON config files are the same 00:07:33.184 + rm /tmp/62.WyD /tmp/spdk_tgt_config.json.5RG 00:07:33.184 + exit 0 00:07:33.184 05:13:37 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:33.184 05:13:37 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:33.184 INFO: changing configuration and checking if this can be detected... 00:07:33.184 05:13:37 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.184 05:13:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.443 05:13:37 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:33.443 05:13:37 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:33.443 05:13:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.443 + '[' 2 -ne 2 ']' 00:07:33.443 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:33.443 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:33.443 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:33.443 +++ basename /dev/fd/62 00:07:33.443 ++ mktemp /tmp/62.XXX 00:07:33.443 + tmp_file_1=/tmp/62.BDF 00:07:33.443 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:33.443 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:33.443 + tmp_file_2=/tmp/spdk_tgt_config.json.XaX 00:07:33.443 + ret=0 00:07:33.443 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:34.009 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:34.009 + diff -u /tmp/62.BDF /tmp/spdk_tgt_config.json.XaX 00:07:34.009 + ret=1 00:07:34.009 + echo '=== Start of file: /tmp/62.BDF ===' 00:07:34.009 + cat /tmp/62.BDF 00:07:34.009 + echo '=== End of file: /tmp/62.BDF ===' 00:07:34.009 + echo '' 00:07:34.009 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XaX ===' 00:07:34.009 + cat /tmp/spdk_tgt_config.json.XaX 00:07:34.009 + echo '=== End of file: /tmp/spdk_tgt_config.json.XaX ===' 00:07:34.009 + echo '' 00:07:34.009 + rm /tmp/62.BDF /tmp/spdk_tgt_config.json.XaX 00:07:34.009 + exit 1 00:07:34.009 INFO: configuration change detected. 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@321 -- # [[ -n 61686 ]] 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.009 05:13:38 json_config -- json_config/json_config.sh@327 -- # killprocess 61686 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@950 -- # '[' -z 61686 ']' 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@954 -- # kill -0 61686 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@955 -- # uname 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61686 00:07:34.009 
05:13:38 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.009 killing process with pid 61686 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61686' 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@969 -- # kill 61686 00:07:34.009 05:13:38 json_config -- common/autotest_common.sh@974 -- # wait 61686 00:07:34.267 05:13:38 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:34.267 05:13:38 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:34.267 05:13:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.267 05:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.267 05:13:38 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:34.267 INFO: Success 00:07:34.268 05:13:38 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:34.268 00:07:34.268 real 0m8.713s 00:07:34.268 user 0m13.036s 00:07:34.268 sys 0m1.524s 00:07:34.268 05:13:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.268 ************************************ 00:07:34.268 END TEST json_config 00:07:34.268 ************************************ 00:07:34.268 05:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.268 05:13:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:34.268 05:13:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.268 05:13:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.268 05:13:38 -- common/autotest_common.sh@10 -- # set +x 00:07:34.268 ************************************ 00:07:34.268 START TEST json_config_extra_key 00:07:34.268 ************************************ 00:07:34.268 05:13:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:07:34.268 05:13:38 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.268 05:13:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.268 05:13:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.268 05:13:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.268 05:13:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.268 05:13:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.268 05:13:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.268 05:13:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:34.268 05:13:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.268 05:13:38 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.268 05:13:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:34.268 INFO: launching applications... 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:34.268 05:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61863 00:07:34.268 Waiting for target to run... 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
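The declare -A lines above are the harness's per-app bookkeeping: one associative array per property, keyed by app name, so the same start/stop helpers can drive a target here and an initiator in the json_config test. The shape, in plain bash; the commented helper lines only illustrate how the arrays get indexed:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
    # helpers then index by name:
    #   spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}
    #   kill -SIGINT ${app_pid[$app]}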
00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:34.268 05:13:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61863 /var/tmp/spdk_tgt.sock 00:07:34.526 05:13:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 61863 ']' 00:07:34.526 05:13:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:34.526 05:13:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:34.526 05:13:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:34.526 05:13:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.526 05:13:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:34.526 [2024-07-25 05:13:38.506683] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:07:34.526 [2024-07-25 05:13:38.506775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61863 ] 00:07:34.832 [2024-07-25 05:13:38.791858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.832 [2024-07-25 05:13:38.836818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.398 05:13:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.398 05:13:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:35.398 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:35.398 INFO: shutting down applications... 00:07:35.398 05:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:35.398 05:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61863 ]] 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61863 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61863 00:07:35.398 05:13:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:35.964 05:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:35.964 05:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:35.964 05:13:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61863 00:07:35.964 05:13:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:35.964 05:13:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:35.964 05:13:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:35.964 SPDK target shutdown done 00:07:35.964 05:13:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:35.964 Success 00:07:35.964 05:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:35.964 00:07:35.964 real 0m1.618s 00:07:35.964 user 0m1.493s 00:07:35.964 sys 0m0.294s 00:07:35.964 05:13:39 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.964 05:13:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:35.964 ************************************ 00:07:35.964 END TEST json_config_extra_key 00:07:35.964 ************************************ 00:07:35.964 05:13:40 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:35.964 05:13:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.964 05:13:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.964 05:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:35.964 ************************************ 00:07:35.964 START TEST alias_rpc 00:07:35.964 ************************************ 00:07:35.964 05:13:40 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:35.964 * Looking for test storage... 
00:07:35.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:35.964 05:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.964 05:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61940 00:07:35.964 05:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:35.964 05:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61940 00:07:35.964 05:13:40 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 61940 ']' 00:07:35.964 05:13:40 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.964 05:13:40 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.964 05:13:40 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.964 05:13:40 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.964 05:13:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.223 [2024-07-25 05:13:40.182942] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:07:36.223 [2024-07-25 05:13:40.183759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:07:36.223 [2024-07-25 05:13:40.332624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.481 [2024-07-25 05:13:40.414116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.047 05:13:41 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.047 05:13:41 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:37.047 05:13:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:37.615 05:13:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61940 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 61940 ']' 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 61940 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61940 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61940' 00:07:37.615 killing process with pid 61940 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@969 -- # kill 61940 00:07:37.615 05:13:41 alias_rpc -- common/autotest_common.sh@974 -- # wait 61940 00:07:37.874 00:07:37.874 real 0m1.759s 00:07:37.874 user 0m2.197s 00:07:37.874 sys 0m0.331s 00:07:37.874 05:13:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.874 ************************************ 00:07:37.874 END TEST alias_rpc 00:07:37.874 ************************************ 00:07:37.874 05:13:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.874 
05:13:41 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:07:37.874 05:13:41 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:37.874 05:13:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.874 05:13:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.874 05:13:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.874 ************************************ 00:07:37.874 START TEST dpdk_mem_utility 00:07:37.874 ************************************ 00:07:37.874 05:13:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:37.874 * Looking for test storage... 00:07:37.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:37.874 05:13:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:37.874 05:13:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62027 00:07:37.874 05:13:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62027 00:07:37.874 05:13:41 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 62027 ']' 00:07:37.874 05:13:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.874 05:13:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.874 05:13:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.874 05:13:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.874 05:13:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:37.874 05:13:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:37.874 [2024-07-25 05:13:41.999128] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:07:37.874 [2024-07-25 05:13:41.999248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62027 ] 00:07:38.132 [2024-07-25 05:13:42.153310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.132 [2024-07-25 05:13:42.224570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.068 05:13:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.068 05:13:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:39.068 05:13:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:39.068 05:13:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:39.068 05:13:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.068 05:13:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:39.068 { 00:07:39.068 "filename": "/tmp/spdk_mem_dump.txt" 00:07:39.068 } 00:07:39.068 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.068 05:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:39.068 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:39.068 1 heaps totaling size 814.000000 MiB 00:07:39.068 size: 814.000000 MiB heap id: 0 00:07:39.068 end heaps---------- 00:07:39.068 8 mempools totaling size 598.116089 MiB 00:07:39.068 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:39.068 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:39.068 size: 84.521057 MiB name: bdev_io_62027 00:07:39.068 size: 51.011292 MiB name: evtpool_62027 00:07:39.068 size: 50.003479 MiB name: msgpool_62027 00:07:39.068 size: 21.763794 MiB name: PDU_Pool 00:07:39.068 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:39.068 size: 0.026123 MiB name: Session_Pool 00:07:39.068 end mempools------- 00:07:39.068 6 memzones totaling size 4.142822 MiB 00:07:39.068 size: 1.000366 MiB name: RG_ring_0_62027 00:07:39.068 size: 1.000366 MiB name: RG_ring_1_62027 00:07:39.068 size: 1.000366 MiB name: RG_ring_4_62027 00:07:39.068 size: 1.000366 MiB name: RG_ring_5_62027 00:07:39.068 size: 0.125366 MiB name: RG_ring_2_62027 00:07:39.068 size: 0.015991 MiB name: RG_ring_3_62027 00:07:39.068 end memzones------- 00:07:39.068 05:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:39.068 heap id: 0 total size: 814.000000 MiB number of busy elements: 208 number of free elements: 15 00:07:39.068 list of free elements. 
size: 12.488770 MiB 00:07:39.068 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:39.068 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:39.068 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:39.068 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:39.068 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:39.068 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:39.068 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:39.068 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:39.068 element at address: 0x200000200000 with size: 0.837036 MiB 00:07:39.068 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:07:39.068 element at address: 0x20000b200000 with size: 0.489990 MiB 00:07:39.068 element at address: 0x200000800000 with size: 0.487061 MiB 00:07:39.068 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:39.068 element at address: 0x200027e00000 with size: 0.399414 MiB 00:07:39.068 element at address: 0x200003a00000 with size: 0.351685 MiB 00:07:39.068 list of standard malloc elements. size: 199.248657 MiB 00:07:39.068 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:39.068 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:39.068 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:39.068 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:39.068 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:39.068 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:39.068 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:39.068 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:39.068 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:39.068 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:07:39.068 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:39.068 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:39.068 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:39.068 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:39.068 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:39.068 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:39.068 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:39.069 element at 
address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:39.069 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e66400 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e664c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d0c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6eac0 with size: 0.000183 MiB 
00:07:39.069 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:39.069 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:39.069 list of memzone associated elements. 
size: 602.262573 MiB 00:07:39.069 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:39.069 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:39.069 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:39.069 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:39.069 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:39.069 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62027_0 00:07:39.069 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:39.069 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62027_0 00:07:39.069 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:39.069 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62027_0 00:07:39.069 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:39.069 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:39.069 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:39.069 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:39.069 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:39.070 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62027 00:07:39.070 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:39.070 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62027 00:07:39.070 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:39.070 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62027 00:07:39.070 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:39.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:39.070 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:39.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:39.070 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:39.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:39.070 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:39.070 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:39.070 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:39.070 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62027 00:07:39.070 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:39.070 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62027 00:07:39.070 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:39.070 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62027 00:07:39.070 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:39.070 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62027 00:07:39.070 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:39.070 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62027 00:07:39.070 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:39.070 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:39.070 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:39.070 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:39.070 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:39.070 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:39.070 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:39.070 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_62027 00:07:39.070 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:39.070 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:39.070 element at address: 0x200027e66580 with size: 0.023743 MiB 00:07:39.070 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:39.070 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:39.070 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62027 00:07:39.070 element at address: 0x200027e6c6c0 with size: 0.002441 MiB 00:07:39.070 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:39.070 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:07:39.070 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62027 00:07:39.070 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:39.070 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62027 00:07:39.070 element at address: 0x200027e6d180 with size: 0.000305 MiB 00:07:39.070 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:39.070 05:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:39.070 05:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62027 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 62027 ']' 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 62027 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62027 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.070 killing process with pid 62027 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62027' 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 62027 00:07:39.070 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 62027 00:07:39.329 00:07:39.329 real 0m1.566s 00:07:39.329 user 0m1.836s 00:07:39.329 sys 0m0.333s 00:07:39.329 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.329 ************************************ 00:07:39.329 05:13:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:39.329 END TEST dpdk_mem_utility 00:07:39.329 ************************************ 00:07:39.329 05:13:43 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:39.329 05:13:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.329 05:13:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.329 05:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.329 ************************************ 00:07:39.329 START TEST event 00:07:39.329 ************************************ 00:07:39.329 05:13:43 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:39.587 * Looking for test storage... 
00:07:39.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:39.588 05:13:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:39.588 05:13:43 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:39.588 05:13:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:39.588 05:13:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:39.588 05:13:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:39.588 05:13:43 event -- common/autotest_common.sh@10 -- # set +x
00:07:39.588 ************************************
00:07:39.588 START TEST event_perf
00:07:39.588 ************************************
00:07:39.588 05:13:43 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:39.588 Running I/O for 1 seconds...[2024-07-25 05:13:43.548604] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:07:39.588 [2024-07-25 05:13:43.548734] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62117 ]
00:07:39.588 [2024-07-25 05:13:43.692018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:39.846 [2024-07-25 05:13:43.766063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:39.846 [2024-07-25 05:13:43.766129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:39.846 [2024-07-25 05:13:43.766266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:39.846 [2024-07-25 05:13:43.766274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.781 Running I/O for 1 seconds...
00:07:40.781 lcore 0: 190928
00:07:40.781 lcore 1: 190926
00:07:40.781 lcore 2: 190926
00:07:40.781 lcore 3: 190928
00:07:40.781 done.
00:07:40.781
00:07:40.781 real 0m1.314s
00:07:40.781 user 0m4.140s
00:07:40.781 sys 0m0.050s
00:07:40.781 05:13:44 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.781 05:13:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:40.781 ************************************
00:07:40.781 END TEST event_perf
00:07:40.781 ************************************
00:07:40.781 05:13:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:40.781 05:13:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:40.781 05:13:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:40.781 05:13:44 event -- common/autotest_common.sh@10 -- # set +x
00:07:40.781 ************************************
00:07:40.781 START TEST event_reactor
00:07:40.781 ************************************
00:07:40.781 05:13:44 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:40.781 [2024-07-25 05:13:44.898309] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:07:40.781 [2024-07-25 05:13:44.898407] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62155 ]
00:07:41.038 [2024-07-25 05:13:45.033209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.038 [2024-07-25 05:13:45.102570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.025 test_start
00:07:42.025 oneshot
00:07:42.025 tick 100
00:07:42.025 tick 100
00:07:42.025 tick 250
00:07:42.025 tick 100
00:07:42.025 tick 100
00:07:42.025 tick 100
00:07:42.025 tick 250
00:07:42.025 tick 500
00:07:42.025 tick 100
00:07:42.025 tick 100
00:07:42.025 tick 250
00:07:42.025 tick 100
00:07:42.025 tick 100
00:07:42.025 test_end
00:07:42.025
00:07:42.025 real 0m1.291s
00:07:42.025 user 0m1.142s
00:07:42.025 sys 0m0.042s
00:07:42.025 05:13:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:42.025 ************************************
00:07:42.025 END TEST event_reactor
00:07:42.025 ************************************
00:07:42.025 05:13:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:42.283 05:13:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:42.283 05:13:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:42.283 05:13:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:42.283 05:13:46 event -- common/autotest_common.sh@10 -- # set +x
00:07:42.283 ************************************
00:07:42.283 START TEST event_reactor_perf
00:07:42.283 ************************************
00:07:42.283 05:13:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:42.283 [2024-07-25 05:13:46.238575] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:07:42.283 [2024-07-25 05:13:46.238659] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62191 ]
00:07:42.283 [2024-07-25 05:13:46.372686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.283 [2024-07-25 05:13:46.432151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.656 test_start
00:07:43.657 test_end
00:07:43.657 Performance: 364550 events per second
00:07:43.657
00:07:43.657 real 0m1.281s
00:07:43.657 user 0m1.134s
00:07:43.657 sys 0m0.041s
00:07:43.657 05:13:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:43.657 ************************************
00:07:43.657 END TEST event_reactor_perf
00:07:43.657 ************************************
00:07:43.657 05:13:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:43.657 05:13:47 event -- event/event.sh@49 -- # uname -s
00:07:43.657 05:13:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:43.657 05:13:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:43.657 05:13:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:43.657 05:13:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:43.657 05:13:47 event -- common/autotest_common.sh@10 -- # set +x
00:07:43.657 ************************************
00:07:43.657 START TEST event_scheduler
00:07:43.657 ************************************
00:07:43.657 05:13:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:43.657 * Looking for test storage...
00:07:43.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:07:43.657 05:13:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:43.657 05:13:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62247
00:07:43.657 05:13:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:43.657 05:13:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:43.657 05:13:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62247
00:07:43.657 05:13:47 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 62247 ']'
00:07:43.657 05:13:47 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:43.657 05:13:47 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:43.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:43.657 05:13:47 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:43.657 05:13:47 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:43.657 05:13:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:43.657 [2024-07-25 05:13:47.683902] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:07:43.657 [2024-07-25 05:13:47.684070] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62247 ]
00:07:43.915 [2024-07-25 05:13:47.853301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:43.915 [2024-07-25 05:13:47.935437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.915 [2024-07-25 05:13:47.935541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:43.915 [2024-07-25 05:13:47.935625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:43.915 [2024-07-25 05:13:47.935630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:44.529 05:13:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:44.529 05:13:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:07:44.529 05:13:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:44.529 05:13:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.529 05:13:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:44.529 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:44.529 POWER: Cannot set governor of lcore 0 to userspace
00:07:44.529 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:44.529 POWER: Cannot set governor of lcore 0 to performance
00:07:44.529 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:44.529 POWER: Cannot set governor of lcore 0 to userspace
00:07:44.529 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:44.529 POWER: Cannot set governor of lcore 0 to userspace
00:07:44.529 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:07:44.529 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:07:44.529 POWER: Unable to set Power Management Environment for lcore 0
00:07:44.529 [2024-07-25 05:13:48.677358] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:07:44.529 [2024-07-25 05:13:48.677382] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:07:44.529 [2024-07-25 05:13:48.677402] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:07:44.529 [2024-07-25 05:13:48.677422] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:44.529 [2024-07-25 05:13:48.677435] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:44.529 [2024-07-25 05:13:48.677447] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:44.529 05:13:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.529 05:13:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:44.529 05:13:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.529 05:13:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:44.789 [2024-07-25 05:13:48.733580] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
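The scheduler_create_thread subtest below drives the scheduler test app entirely over its RPC socket, using the scheduler_plugin extensions to rpc.py. As a rough standalone sketch (not part of the harness), the same calls could be issued by hand along the following lines, assuming the scheduler app traced above is still listening on the default /var/tmp/spdk.sock and that the plugin module is importable; the -n/-m/-a values are the ones visible in the trace, everything else is illustrative:

    #!/usr/bin/env bash
    # Illustrative sketch, reconstructed from the xtrace output below.
    set -e
    spdk=/home/vagrant/spdk_repo/spdk
    rpc="$spdk/scripts/rpc.py"
    # Assumption: scheduler_plugin lives next to the scheduler test app and
    # must be importable for rpc.py's --plugin mechanism to find it.
    export PYTHONPATH="$spdk/test/event/scheduler:$PYTHONPATH"

    # Create one busy thread pinned to core 0 (cpumask 0x1, 100% active).
    # The RPC prints the new thread id on stdout, which is how the harness
    # captures thread_id=11 / thread_id=12 in the trace.
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)

    # Drop it to 50% active, then delete it again.
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"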
00:07:44.789 05:13:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.789 05:13:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:44.789 05:13:48 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:44.789 05:13:48 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:44.789 05:13:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:44.789 ************************************
00:07:44.789 START TEST scheduler_create_thread
00:07:44.789 ************************************
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.789 2
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.789 3
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.789 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 4
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 5
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 6
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 7
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 8
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 9
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 10
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.790 05:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:46.168 05:13:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.168
00:07:46.168 real 0m1.173s user 0m0.018s sys 0m0.001s
00:07:46.168 05:13:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:46.168 05:13:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:46.168 ************************************
00:07:46.168 END TEST scheduler_create_thread
00:07:46.168 ************************************
00:07:46.168 05:13:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:46.168 05:13:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62247
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 62247 ']'
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 62247
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62247
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:07:46.168 killing process with pid 62247
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62247'
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 62247
00:07:46.168 05:13:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 62247
00:07:46.426 [2024-07-25 05:13:50.395828] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
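Every suite in this run tears its target down the same way: send SIGINT, then poll the pid until the process exits. Reconstructed from the xtrace output (the real helpers are killprocess in common/autotest_common.sh and the json_config shutdown loop traced earlier), the pattern boils down to roughly this sketch; the 30 x 0.5s polling budget mirrors the trace, while the function name and error handling are simplifications:

    # Illustrative sketch of the shutdown-wait pattern seen in the trace.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" || return 1
        # Poll for up to ~15s (30 iterations x 0.5s); kill -0 delivers no
        # signal, it only tests whether the process still exists.
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 0
            sleep 0.5
        done
        return 1  # still alive; the caller decides whether to escalate
    }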
00:07:46.427 00:07:46.427 real 0m3.026s 00:07:46.427 user 0m5.564s 00:07:46.427 sys 0m0.300s 00:07:46.427 05:13:50 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.427 05:13:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 ************************************ 00:07:46.427 END TEST event_scheduler 00:07:46.427 ************************************ 00:07:46.686 05:13:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:46.686 05:13:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:46.686 05:13:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.686 05:13:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.686 05:13:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.686 ************************************ 00:07:46.686 START TEST app_repeat 00:07:46.686 ************************************ 00:07:46.686 05:13:50 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62348 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.686 Process app_repeat pid: 62348 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62348' 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:46.686 spdk_app_start Round 0 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:46.686 05:13:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62348 /var/tmp/spdk-nbd.sock 00:07:46.686 05:13:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62348 ']' 00:07:46.686 05:13:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:46.686 05:13:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:46.686 05:13:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:46.686 05:13:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.686 05:13:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:46.686 [2024-07-25 05:13:50.651925] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:07:46.686 [2024-07-25 05:13:50.652020] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62348 ] 00:07:46.686 [2024-07-25 05:13:50.787523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.686 [2024-07-25 05:13:50.856781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.686 [2024-07-25 05:13:50.856797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.945 05:13:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.945 05:13:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:46.945 05:13:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.204 Malloc0 00:07:47.204 05:13:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.463 Malloc1 00:07:47.463 05:13:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.463 05:13:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:47.721 /dev/nbd0 00:07:47.721 05:13:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:47.721 05:13:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:47.721 05:13:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:47.721 05:13:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:47.722 05:13:51 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:47.722 1+0 records in 00:07:47.722 1+0 records out 00:07:47.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284018 s, 14.4 MB/s 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.722 05:13:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:47.722 05:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.722 05:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.722 05:13:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:47.980 /dev/nbd1 00:07:48.239 05:13:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.239 05:13:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:48.239 1+0 records in 00:07:48.239 1+0 records out 00:07:48.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032158 s, 12.7 MB/s 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.239 05:13:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:48.239 05:13:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.239 05:13:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.239 05:13:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.239 05:13:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.239 
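The waitfornbd helper traced above (autotest_common.sh lines 868-889) polls /proc/partitions until the device node appears, then proves it answers I/O with a single 4 KiB direct read. A condensed sketch of that logic, using an illustrative /tmp scratch file; the retry sleep is an assumption, since the trace only shows the grep and dd steps:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry interval, not visible in the trace
        done
        # one direct-I/O block read; a zero-byte result means the device is not ready
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s /tmp/nbdtest) != 0 ]]
    }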
05:13:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.498 { 00:07:48.498 "bdev_name": "Malloc0", 00:07:48.498 "nbd_device": "/dev/nbd0" 00:07:48.498 }, 00:07:48.498 { 00:07:48.498 "bdev_name": "Malloc1", 00:07:48.498 "nbd_device": "/dev/nbd1" 00:07:48.498 } 00:07:48.498 ]' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.498 { 00:07:48.498 "bdev_name": "Malloc0", 00:07:48.498 "nbd_device": "/dev/nbd0" 00:07:48.498 }, 00:07:48.498 { 00:07:48.498 "bdev_name": "Malloc1", 00:07:48.498 "nbd_device": "/dev/nbd1" 00:07:48.498 } 00:07:48.498 ]' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.498 /dev/nbd1' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.498 /dev/nbd1' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:48.498 256+0 records in 00:07:48.498 256+0 records out 00:07:48.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00816273 s, 128 MB/s 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:48.498 256+0 records in 00:07:48.498 256+0 records out 00:07:48.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260993 s, 40.2 MB/s 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:48.498 256+0 records in 00:07:48.498 256+0 records out 00:07:48.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296737 s, 35.3 MB/s 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.498 05:13:52 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.498 05:13:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.757 05:13:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.016 05:13:53 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.016 05:13:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.279 05:13:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.279 05:13:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.279 05:13:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:49.540 05:13:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:49.540 05:13:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:49.799 05:13:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:49.799 [2024-07-25 05:13:53.896525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:49.799 [2024-07-25 05:13:53.954870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.799 [2024-07-25 05:13:53.954880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.057 [2024-07-25 05:13:53.985184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:50.057 [2024-07-25 05:13:53.985233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:53.342 spdk_app_start Round 1 00:07:53.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:53.342 05:13:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:53.342 05:13:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:53.342 05:13:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62348 /var/tmp/spdk-nbd.sock 00:07:53.342 05:13:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62348 ']' 00:07:53.342 05:13:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.342 05:13:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.342 05:13:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
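Round 1 starts here with the same pid, because the app_repeat binary restarts spdk_app_start in-process while event.sh loops over the rounds. In outline, following the event.sh line numbers in the trace (the bdev/nbd setup between the waits is elided):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # RPC socket is up
        # create two Malloc bdevs, export them as /dev/nbd0 and /dev/nbd1,
        # and run the write/verify pass shown in the trace
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3   # let the app wind down before the next round
    done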
00:07:53.342 05:13:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.342 05:13:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:53.342 05:13:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.342 05:13:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:53.342 05:13:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.342 Malloc0 00:07:53.342 05:13:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.599 Malloc1 00:07:53.599 05:13:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.599 05:13:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:53.895 /dev/nbd0 00:07:53.895 05:13:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:53.895 05:13:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:53.895 05:13:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:53.896 1+0 records in 00:07:53.896 1+0 records out 
00:07:53.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401979 s, 10.2 MB/s 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.896 05:13:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:53.896 05:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.896 05:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.896 05:13:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:54.170 /dev/nbd1 00:07:54.170 05:13:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:54.170 05:13:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:54.170 1+0 records in 00:07:54.170 1+0 records out 00:07:54.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258332 s, 15.9 MB/s 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:54.170 05:13:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:54.170 05:13:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.170 05:13:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:54.170 05:13:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.170 05:13:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.170 05:13:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.427 05:13:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.427 { 00:07:54.427 "bdev_name": "Malloc0", 00:07:54.427 "nbd_device": "/dev/nbd0" 00:07:54.427 }, 00:07:54.427 { 00:07:54.427 "bdev_name": "Malloc1", 00:07:54.427 "nbd_device": "/dev/nbd1" 00:07:54.427 } 
00:07:54.427 ]' 00:07:54.427 05:13:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.427 { 00:07:54.427 "bdev_name": "Malloc0", 00:07:54.427 "nbd_device": "/dev/nbd0" 00:07:54.427 }, 00:07:54.427 { 00:07:54.427 "bdev_name": "Malloc1", 00:07:54.427 "nbd_device": "/dev/nbd1" 00:07:54.427 } 00:07:54.427 ]' 00:07:54.427 05:13:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:54.685 /dev/nbd1' 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:54.685 /dev/nbd1' 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:54.685 256+0 records in 00:07:54.685 256+0 records out 00:07:54.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00748946 s, 140 MB/s 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:54.685 256+0 records in 00:07:54.685 256+0 records out 00:07:54.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260454 s, 40.3 MB/s 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:54.685 256+0 records in 00:07:54.685 256+0 records out 00:07:54.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292133 s, 35.9 MB/s 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:54.685 05:13:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.685 05:13:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.943 05:13:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.509 05:13:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:55.767 05:13:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:55.767 05:13:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:56.025 05:14:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:56.025 [2024-07-25 05:14:00.200488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:56.283 [2024-07-25 05:14:00.276040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.283 [2024-07-25 05:14:00.276056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.283 [2024-07-25 05:14:00.311506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:56.283 [2024-07-25 05:14:00.311612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:59.565 05:14:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:59.565 spdk_app_start Round 2 00:07:59.565 05:14:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:59.565 05:14:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62348 /var/tmp/spdk-nbd.sock 00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62348 ']' 00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
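Round 2 repeats the same core check, nbd_dd_data_verify: fill a scratch file with 1 MiB of random data, push it through each nbd device with direct I/O, then cmp the device contents back against the source. The commands below are lifted from the trace; only the loop framing is added:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256       # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"   # byte-for-byte readback check
    done
    rm "$tmp_file"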
00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.565 05:14:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:59.565 05:14:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:59.565 Malloc0 00:07:59.565 05:14:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.131 Malloc1 00:08:00.131 05:14:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:00.131 /dev/nbd0 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:00.131 05:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.131 05:14:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:00.132 1+0 records in 00:08:00.132 1+0 records out 
00:08:00.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247584 s, 16.5 MB/s 00:08:00.132 05:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:00.132 05:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:00.132 05:14:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:00.132 05:14:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.132 05:14:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:00.132 05:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.132 05:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.132 05:14:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:00.390 /dev/nbd1 00:08:00.649 05:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:00.649 05:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:00.649 1+0 records in 00:08:00.649 1+0 records out 00:08:00.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421343 s, 9.7 MB/s 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.649 05:14:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:00.649 05:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.649 05:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.649 05:14:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.649 05:14:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.649 05:14:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:00.908 { 00:08:00.908 "bdev_name": "Malloc0", 00:08:00.908 "nbd_device": "/dev/nbd0" 00:08:00.908 }, 00:08:00.908 { 00:08:00.908 "bdev_name": "Malloc1", 00:08:00.908 "nbd_device": "/dev/nbd1" 00:08:00.908 } 
00:08:00.908 ]' 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:00.908 { 00:08:00.908 "bdev_name": "Malloc0", 00:08:00.908 "nbd_device": "/dev/nbd0" 00:08:00.908 }, 00:08:00.908 { 00:08:00.908 "bdev_name": "Malloc1", 00:08:00.908 "nbd_device": "/dev/nbd1" 00:08:00.908 } 00:08:00.908 ]' 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:00.908 /dev/nbd1' 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:00.908 /dev/nbd1' 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:00.908 256+0 records in 00:08:00.908 256+0 records out 00:08:00.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514059 s, 204 MB/s 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.908 05:14:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:00.908 256+0 records in 00:08:00.908 256+0 records out 00:08:00.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268856 s, 39.0 MB/s 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:00.908 256+0 records in 00:08:00.908 256+0 records out 00:08:00.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277042 s, 37.8 MB/s 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:00.908 05:14:05 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.908 05:14:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.167 05:14:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.426 05:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.684 05:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.943 05:14:05 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:01.943 05:14:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:01.943 05:14:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:02.202 05:14:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:02.202 [2024-07-25 05:14:06.335368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.461 [2024-07-25 05:14:06.394468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.461 [2024-07-25 05:14:06.394478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.461 [2024-07-25 05:14:06.424445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:02.461 [2024-07-25 05:14:06.424506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:05.749 05:14:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62348 /var/tmp/spdk-nbd.sock 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62348 ']' 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
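The count=0 check just traced is nbd_get_count: once both devices are stopped, nbd_get_disks returns an empty JSON array, so the jq/grep pipeline counts zero /dev/nbd entries. A sketch under the same socket convention (the '|| true' guard matches the bare 'true' visible in the trace, since grep -c exits non-zero on zero matches):

    nbd_get_count() {
        local rpc_server=$1
        scripts/rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true   # prints 0; the guard keeps set -e happy
    }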
00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:05.749 05:14:09 event.app_repeat -- event/event.sh@39 -- # killprocess 62348 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 62348 ']' 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 62348 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62348 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.749 killing process with pid 62348 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62348' 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 62348 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 62348 00:08:05.749 spdk_app_start is called in Round 0. 00:08:05.749 Shutdown signal received, stop current app iteration 00:08:05.749 Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 reinitialization... 00:08:05.749 spdk_app_start is called in Round 1. 00:08:05.749 Shutdown signal received, stop current app iteration 00:08:05.749 Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 reinitialization... 00:08:05.749 spdk_app_start is called in Round 2. 00:08:05.749 Shutdown signal received, stop current app iteration 00:08:05.749 Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 reinitialization... 00:08:05.749 spdk_app_start is called in Round 3. 00:08:05.749 Shutdown signal received, stop current app iteration 00:08:05.749 05:14:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:05.749 05:14:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:05.749 00:08:05.749 real 0m19.077s 00:08:05.749 user 0m43.555s 00:08:05.749 sys 0m2.847s 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.749 05:14:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:05.749 ************************************ 00:08:05.749 END TEST app_repeat 00:08:05.749 ************************************ 00:08:05.749 05:14:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:05.749 05:14:09 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:05.749 05:14:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.749 05:14:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.749 05:14:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:05.749 ************************************ 00:08:05.749 START TEST cpu_locks 00:08:05.749 ************************************ 00:08:05.749 05:14:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:05.749 * Looking for test storage... 
00:08:05.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:05.749 05:14:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:05.749 05:14:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:05.749 05:14:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:05.749 05:14:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:05.749 05:14:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.749 05:14:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.749 05:14:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.749 ************************************ 00:08:05.749 START TEST default_locks 00:08:05.749 ************************************ 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62970 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62970 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62970 ']' 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.749 05:14:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.749 [2024-07-25 05:14:09.903862] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:08:05.749 [2024-07-25 05:14:09.903975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62970 ] 00:08:06.008 [2024-07-25 05:14:10.045875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.008 [2024-07-25 05:14:10.105457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.946 05:14:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.946 05:14:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:06.946 05:14:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62970 00:08:06.946 05:14:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62970 00:08:06.946 05:14:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62970 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 62970 ']' 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 62970 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62970 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.204 killing process with pid 62970 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62970' 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 62970 00:08:07.204 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 62970 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62970 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62970 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:07.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
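The locks_exist check above reduces to: ask lslocks which files the target pid holds locks on, and grep for the spdk_cpu_lock prefix. The same probe by hand (a sketch; it assumes a single spdk_tgt started with -m 0x1, and pgrep is an assumption, not part of the harness):

    pid=$(pgrep -f spdk_tgt)
    lslocks -p "$pid" | grep spdk_cpu_lock   # expect an entry for /var/tmp/spdk_cpu_lock_000
    ls /var/tmp/spdk_cpu_lock_*              # one lock file per claimed core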
00:08:07.463 ERROR: process (pid: 62970) is no longer running 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 62970 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62970 ']' 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.463 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (62970) - No such process 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:07.463 00:08:07.463 real 0m1.777s 00:08:07.463 user 0m2.044s 00:08:07.463 sys 0m0.451s 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.463 ************************************ 00:08:07.463 END TEST default_locks 00:08:07.463 ************************************ 00:08:07.463 05:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.721 05:14:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:07.721 05:14:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.721 05:14:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.721 05:14:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.721 ************************************ 00:08:07.721 START TEST default_locks_via_rpc 00:08:07.721 ************************************ 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63029 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63029 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63029 ']' 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.721 05:14:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.721 [2024-07-25 05:14:11.713074] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:07.721 [2024-07-25 05:14:11.713530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63029 ] 00:08:07.721 [2024-07-25 05:14:11.848350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.980 [2024-07-25 05:14:11.918216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:07.980 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63029 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:07.981 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63029 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63029 
00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 63029 ']' 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 63029 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63029 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.547 killing process with pid 63029 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63029' 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 63029 00:08:08.547 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 63029 00:08:08.807 00:08:08.807 real 0m1.207s 00:08:08.807 user 0m1.267s 00:08:08.807 sys 0m0.438s 00:08:08.807 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.807 ************************************ 00:08:08.807 END TEST default_locks_via_rpc 00:08:08.807 ************************************ 00:08:08.807 05:14:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.807 05:14:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:08.807 05:14:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.807 05:14:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.807 05:14:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.807 ************************************ 00:08:08.807 START TEST non_locking_app_on_locked_coremask 00:08:08.807 ************************************ 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63085 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63085 /var/tmp/spdk.sock 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63085 ']' 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
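default_locks_via_rpc, which just wrapped up, shows that lock ownership can be toggled at runtime: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them, which the test confirms with the same lslocks probe. Sketched against a live target (the RPC names are verbatim from the log; $pid is carried over from the sketch above):

    ./scripts/rpc.py framework_disable_cpumask_locks   # drop the per-core lock files
    ./scripts/rpc.py framework_enable_cpumask_locks    # take them back
    lslocks -p "$pid" | grep -c spdk_cpu_lock          # back to one lock per core in the mask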
00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.807 05:14:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.807 [2024-07-25 05:14:12.973492] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:08.807 [2024-07-25 05:14:12.973585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63085 ] 00:08:09.066 [2024-07-25 05:14:13.105062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.066 [2024-07-25 05:14:13.166810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63113 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63113 /var/tmp/spdk2.sock 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63113 ']' 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.027 05:14:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.027 [2024-07-25 05:14:14.071778] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:10.027 [2024-07-25 05:14:14.071890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63113 ] 00:08:10.286 [2024-07-25 05:14:14.217332] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
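In non_locking_app_on_locked_coremask the first target (pid 63085 in the log) holds the core 0 lock, yet the second one comes up anyway because --disable-cpumask-locks tells it not to contend — hence its "CPU core locks deactivated" notice, and hence the lslocks check that follows still targets 63085, the only lock holder. The shape of the scenario (flags verbatim from the log; backgrounding is illustrative):

    ./build/bin/spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0 without claiming it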
00:08:10.286 [2024-07-25 05:14:14.217383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.286 [2024-07-25 05:14:14.331857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.221 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.221 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:11.221 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63085 00:08:11.221 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63085 00:08:11.221 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63085 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63085 ']' 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63085 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63085 00:08:11.787 killing process with pid 63085 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63085' 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63085 00:08:11.787 05:14:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63085 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63113 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63113 ']' 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63113 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63113 00:08:12.353 killing process with pid 63113 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63113' 00:08:12.353 05:14:16 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63113 00:08:12.353 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63113 00:08:12.611 ************************************ 00:08:12.611 END TEST non_locking_app_on_locked_coremask 00:08:12.611 ************************************ 00:08:12.611 00:08:12.611 real 0m3.777s 00:08:12.611 user 0m4.593s 00:08:12.612 sys 0m0.898s 00:08:12.612 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.612 05:14:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.612 05:14:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:12.612 05:14:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.612 05:14:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.612 05:14:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.612 ************************************ 00:08:12.612 START TEST locking_app_on_unlocked_coremask 00:08:12.612 ************************************ 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:12.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63187 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63187 /var/tmp/spdk.sock 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63187 ']' 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.612 05:14:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.870 [2024-07-25 05:14:16.816752] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:12.870 [2024-07-25 05:14:16.816891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63187 ] 00:08:12.870 [2024-07-25 05:14:16.965886] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
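The test now under way, locking_app_on_unlocked_coremask, inverts the previous case: the first target (63187) starts with --disable-cpumask-locks, so core 0 is never claimed, and the second, lock-taking target (63215, launched below) is the one that ends up owning /var/tmp/spdk_cpu_lock_000 — which is why the later lslocks check runs against 63215, not 63187. Schematically:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # core 0 stays unclaimed
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # this one takes the core 0 lock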
00:08:12.870 [2024-07-25 05:14:16.965966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.870 [2024-07-25 05:14:17.037289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63215 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63215 /var/tmp/spdk2.sock 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63215 ']' 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.806 05:14:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.806 [2024-07-25 05:14:17.785329] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:08:13.806 [2024-07-25 05:14:17.785434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63215 ] 00:08:13.806 [2024-07-25 05:14:17.930452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.065 [2024-07-25 05:14:18.049354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.024 05:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.024 05:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:15.024 05:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63215 00:08:15.024 05:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63215 00:08:15.024 05:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63187 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63187 ']' 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63187 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63187 00:08:15.591 killing process with pid 63187 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63187' 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63187 00:08:15.591 05:14:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63187 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63215 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63215 ']' 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63215 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63215 00:08:16.159 killing process with pid 63215 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.159 05:14:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63215' 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63215 00:08:16.159 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63215 00:08:16.418 00:08:16.418 real 0m3.709s 00:08:16.418 user 0m4.421s 00:08:16.418 sys 0m0.899s 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.418 ************************************ 00:08:16.418 END TEST locking_app_on_unlocked_coremask 00:08:16.418 ************************************ 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.418 05:14:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:16.418 05:14:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.418 05:14:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.418 05:14:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.418 ************************************ 00:08:16.418 START TEST locking_app_on_locked_coremask 00:08:16.418 ************************************ 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:16.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63288 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63288 /var/tmp/spdk.sock 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63288 ']' 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.418 05:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.418 [2024-07-25 05:14:20.547024] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:08:16.418 [2024-07-25 05:14:20.547108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63288 ] 00:08:16.676 [2024-07-25 05:14:20.684756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.676 [2024-07-25 05:14:20.757744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63316 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63316 /var/tmp/spdk2.sock 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63316 /var/tmp/spdk2.sock 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:17.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63316 /var/tmp/spdk2.sock 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63316 ']' 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.608 05:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.608 [2024-07-25 05:14:21.553699] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:08:17.608 [2024-07-25 05:14:21.553794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63316 ] 00:08:17.608 [2024-07-25 05:14:21.697444] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63288 has claimed it. 00:08:17.608 [2024-07-25 05:14:21.697524] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:18.174 ERROR: process (pid: 63316) is no longer running 00:08:18.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63316) - No such process 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63288 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63288 00:08:18.174 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63288 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63288 ']' 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63288 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63288 00:08:18.740 killing process with pid 63288 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.740 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.741 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63288' 00:08:18.741 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63288 00:08:18.741 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63288 00:08:18.999 ************************************ 00:08:18.999 END TEST locking_app_on_locked_coremask 00:08:18.999 ************************************ 00:08:18.999 00:08:18.999 real 0m2.496s 00:08:18.999 user 0m3.020s 00:08:18.999 sys 0m0.529s 00:08:18.999 05:14:22 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.999 05:14:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.999 05:14:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:18.999 05:14:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.999 05:14:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.999 05:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.999 ************************************ 00:08:18.999 START TEST locking_overlapped_coremask 00:08:18.999 ************************************ 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63368 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63368 /var/tmp/spdk.sock 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63368 ']' 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.999 05:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.999 [2024-07-25 05:14:23.106199] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
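The failure path exercised in the test that just ended is the heart of the suite: when a second target tries to lock a core that is already claimed, spdk_app_start logs "Cannot create lock on core 0, probably process 63288 has claimed it" and exits non-zero, and the harness's NOT wrapper passes precisely because the command failed. Reproduced by hand (a sketch; the first target is assumed to still hold core 0, and the pid in the message will differ):

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    # app.c: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.
    echo $?   # non-zero: the instance refused to start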
00:08:18.999 [2024-07-25 05:14:23.106313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63368 ] 00:08:19.258 [2024-07-25 05:14:23.250497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.258 [2024-07-25 05:14:23.311412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.258 [2024-07-25 05:14:23.311504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.258 [2024-07-25 05:14:23.311511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63398 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63398 /var/tmp/spdk2.sock 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63398 /var/tmp/spdk2.sock 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:20.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63398 /var/tmp/spdk2.sock 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63398 ']' 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.191 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.191 [2024-07-25 05:14:24.161155] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
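The overlapped-coremask pair about to collide differs from the earlier tests only in where the masks intersect: the first target took 0x7 (cores 0-2) and the launch under way asks for 0x1c (cores 2-4), so only core 2 is contested — exactly the core named in the error that follows. The mask arithmetic, for reference:

    # 0x7  -> binary 00111 -> cores 0,1,2   (first target)
    # 0x1c -> binary 11100 -> cores 2,3,4   (second target)
    printf '0x%x\n' $(( 0x7 & 0x1c ))       # 0x4 -> bit 2 set -> core 2 is shared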
00:08:20.191 [2024-07-25 05:14:24.161248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63398 ] 00:08:20.191 [2024-07-25 05:14:24.307588] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63368 has claimed it. 00:08:20.191 [2024-07-25 05:14:24.307658] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:20.757 ERROR: process (pid: 63398) is no longer running 00:08:20.757 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63398) - No such process 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63368 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 63368 ']' 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 63368 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63368 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63368' 00:08:20.757 killing process with pid 63368 00:08:20.757 05:14:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 63368 00:08:20.757 05:14:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 63368 00:08:21.016 00:08:21.016 real 0m2.139s 00:08:21.016 user 0m6.141s 00:08:21.016 sys 0m0.346s 00:08:21.016 05:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.016 05:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.016 ************************************ 00:08:21.016 END TEST locking_overlapped_coremask 00:08:21.016 ************************************ 00:08:21.275 05:14:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:21.275 05:14:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.275 05:14:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.275 05:14:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.275 ************************************ 00:08:21.275 START TEST locking_overlapped_coremask_via_rpc 00:08:21.275 ************************************ 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63444 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63444 /var/tmp/spdk.sock 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63444 ']' 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.275 05:14:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.275 [2024-07-25 05:14:25.297129] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:21.275 [2024-07-25 05:14:25.297977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63444 ] 00:08:21.275 [2024-07-25 05:14:25.436390] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:21.275 [2024-07-25 05:14:25.436443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.533 [2024-07-25 05:14:25.498243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.533 [2024-07-25 05:14:25.498376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.533 [2024-07-25 05:14:25.498382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63474 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63474 /var/tmp/spdk2.sock 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63474 ']' 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.098 05:14:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.356 [2024-07-25 05:14:26.322877] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:22.356 [2024-07-25 05:14:26.323362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63474 ] 00:08:22.356 [2024-07-25 05:14:26.481162] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:22.356 [2024-07-25 05:14:26.481246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.614 [2024-07-25 05:14:26.656972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.614 [2024-07-25 05:14:26.660062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:22.614 [2024-07-25 05:14:26.660066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.178 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.436 [2024-07-25 05:14:27.363101] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63444 has claimed it. 
00:08:23.436 2024/07/25 05:14:27 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:08:23.436 request: 00:08:23.436 { 00:08:23.436 "method": "framework_enable_cpumask_locks", 00:08:23.436 "params": {} 00:08:23.436 } 00:08:23.436 Got JSON-RPC error response 00:08:23.436 GoRPCClient: error on JSON-RPC call 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63444 /var/tmp/spdk.sock 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63444 ']' 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.436 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63474 /var/tmp/spdk2.sock 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63474 ']' 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
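Note: the -32603 response above is exactly what this test asserts. Pid 63444 (reactors on cores 0-2) claimed the per-core lock files when framework_enable_cpumask_locks succeeded on its socket, so the second target (63474, -m 0x1c, started with --disable-cpumask-locks) cannot claim the shared core 2 when locking is re-enabled over its RPC socket. A manual repro sketch while both targets are up (rpc.py is SPDK's stock RPC client; the lock-file pattern is the one check_remaining_locks globs just below):

    ls /var/tmp/spdk_cpu_lock_*    # expected: ..._000 ..._001 ..._002, held by 63444
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # expected failure: Code=-32603 Msg='Failed to claim CPU core: 2'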
00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.772 05:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:24.030 ************************************ 00:08:24.030 END TEST locking_overlapped_coremask_via_rpc 00:08:24.030 ************************************ 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:24.030 00:08:24.030 real 0m2.885s 00:08:24.030 user 0m1.582s 00:08:24.030 sys 0m0.226s 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.030 05:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.030 05:14:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:24.030 05:14:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63444 ]] 00:08:24.030 05:14:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63444 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63444 ']' 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63444 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63444 00:08:24.030 killing process with pid 63444 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63444' 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63444 00:08:24.030 05:14:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63444 00:08:24.288 05:14:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63474 ]] 00:08:24.288 05:14:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63474 00:08:24.288 05:14:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63474 ']' 00:08:24.288 05:14:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63474 00:08:24.288 05:14:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:24.288 05:14:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.288 
05:14:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63474 00:08:24.288 05:14:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:24.288 05:14:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:24.288 05:14:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63474' 00:08:24.288 killing process with pid 63474 00:08:24.546 05:14:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63474 00:08:24.546 05:14:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63474 00:08:24.804 05:14:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:24.804 05:14:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:24.804 05:14:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63444 ]] 00:08:24.804 05:14:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63444 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63444 ']' 00:08:24.804 Process with pid 63444 is not found 00:08:24.804 Process with pid 63474 is not found 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63444 00:08:24.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63444) - No such process 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63444 is not found' 00:08:24.804 05:14:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63474 ]] 00:08:24.804 05:14:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63474 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63474 ']' 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63474 00:08:24.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63474) - No such process 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63474 is not found' 00:08:24.804 05:14:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:24.804 00:08:24.804 real 0m18.985s 00:08:24.804 user 0m35.956s 00:08:24.804 sys 0m4.432s 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.804 05:14:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 ************************************ 00:08:24.804 END TEST cpu_locks 00:08:24.804 ************************************ 00:08:24.804 ************************************ 00:08:24.804 END TEST event 00:08:24.804 ************************************ 00:08:24.804 00:08:24.804 real 0m45.323s 00:08:24.804 user 1m31.616s 00:08:24.804 sys 0m7.922s 00:08:24.804 05:14:28 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.804 05:14:28 event -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 05:14:28 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:24.804 05:14:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.804 05:14:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.804 05:14:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 ************************************ 00:08:24.804 START TEST thread 00:08:24.804 ************************************ 00:08:24.804 05:14:28 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:24.804 * Looking for test storage... 
00:08:24.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:24.804 05:14:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:24.804 05:14:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:24.804 05:14:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.804 05:14:28 thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 ************************************ 00:08:24.804 START TEST thread_poller_perf 00:08:24.804 ************************************ 00:08:24.804 05:14:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:24.804 [2024-07-25 05:14:28.904157] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:24.804 [2024-07-25 05:14:28.904251] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63626 ] 00:08:25.061 [2024-07-25 05:14:29.037200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.061 [2024-07-25 05:14:29.110211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.061 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:26.434 ====================================== 00:08:26.434 busy:2208183776 (cyc) 00:08:26.434 total_run_count: 288000 00:08:26.434 tsc_hz: 2200000000 (cyc) 00:08:26.434 ====================================== 00:08:26.434 poller_cost: 7667 (cyc), 3485 (nsec) 00:08:26.434 00:08:26.434 real 0m1.315s 00:08:26.434 user 0m1.166s 00:08:26.434 sys 0m0.041s 00:08:26.434 05:14:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.434 05:14:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:26.434 ************************************ 00:08:26.434 END TEST thread_poller_perf 00:08:26.434 ************************************ 00:08:26.434 05:14:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:26.434 05:14:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:26.434 05:14:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.434 05:14:30 thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.434 ************************************ 00:08:26.434 START TEST thread_poller_perf 00:08:26.434 ************************************ 00:08:26.434 05:14:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:26.434 [2024-07-25 05:14:30.268851] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:08:26.434 [2024-07-25 05:14:30.268967] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63656 ] 00:08:26.434 [2024-07-25 05:14:30.404363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.434 Running 1000 pollers for 1 seconds with 0 microseconds period. 
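Note: poller_cost in the first summary above is plain arithmetic over the printed counters: busy TSC cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Recomputing the -l 1 numbers by hand (a sketch using bash integer math; values copied from the run above):

    busy=2208183776 runs=288000 tsc_hz=2200000000
    cyc=$(( busy / runs ))                   # 7667 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 3485 ns at 2.2 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The zero-period run announced just above amortizes far better, as its summary below shows (583 cyc, about 265 ns per invocation).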
00:08:26.434 [2024-07-25 05:14:30.466371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.369 ====================================== 00:08:27.369 busy:2202117008 (cyc) 00:08:27.369 total_run_count: 3772000 00:08:27.369 tsc_hz: 2200000000 (cyc) 00:08:27.369 ====================================== 00:08:27.369 poller_cost: 583 (cyc), 265 (nsec) 00:08:27.369 00:08:27.369 real 0m1.293s 00:08:27.369 user 0m1.139s 00:08:27.369 sys 0m0.045s 00:08:27.628 ************************************ 00:08:27.628 END TEST thread_poller_perf 00:08:27.628 ************************************ 00:08:27.628 05:14:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.628 05:14:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:27.628 05:14:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:27.628 ************************************ 00:08:27.628 END TEST thread 00:08:27.628 ************************************ 00:08:27.628 00:08:27.628 real 0m2.775s 00:08:27.628 user 0m2.378s 00:08:27.628 sys 0m0.177s 00:08:27.628 05:14:31 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.628 05:14:31 thread -- common/autotest_common.sh@10 -- # set +x 00:08:27.628 05:14:31 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:27.628 05:14:31 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:27.628 05:14:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.628 05:14:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.628 05:14:31 -- common/autotest_common.sh@10 -- # set +x 00:08:27.628 ************************************ 00:08:27.628 START TEST app_cmdline 00:08:27.628 ************************************ 00:08:27.628 05:14:31 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:27.628 * Looking for test storage... 00:08:27.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:27.628 05:14:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:27.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.628 05:14:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63730 00:08:27.628 05:14:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63730 00:08:27.628 05:14:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:27.628 05:14:31 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 63730 ']' 00:08:27.628 05:14:31 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.628 05:14:31 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.628 05:14:31 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.628 05:14:31 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.628 05:14:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:27.628 [2024-07-25 05:14:31.777498] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:08:27.628 [2024-07-25 05:14:31.777613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63730 ] 00:08:27.887 [2024-07-25 05:14:31.917816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.887 [2024-07-25 05:14:31.989185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.821 05:14:32 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.821 05:14:32 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:28.821 05:14:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:29.080 { 00:08:29.080 "fields": { 00:08:29.080 "commit": "6a0934c18", 00:08:29.080 "major": 24, 00:08:29.080 "minor": 9, 00:08:29.080 "patch": 0, 00:08:29.080 "suffix": "-pre" 00:08:29.080 }, 00:08:29.080 "version": "SPDK v24.09-pre git sha1 6a0934c18" 00:08:29.080 } 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:29.080 05:14:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:29.080 05:14:33 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:29.345 2024/07/25 05:14:33 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:08:29.345 request: 00:08:29.345 { 00:08:29.345 "method": "env_dpdk_get_mem_stats", 00:08:29.345 "params": {} 00:08:29.345 } 00:08:29.348 Got JSON-RPC error response 00:08:29.348 GoRPCClient: error on JSON-RPC call 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.348 05:14:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63730 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 63730 ']' 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 63730 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63730 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.348 killing process with pid 63730 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63730' 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@969 -- # kill 63730 00:08:29.348 05:14:33 app_cmdline -- common/autotest_common.sh@974 -- # wait 63730 00:08:29.611 ************************************ 00:08:29.611 END TEST app_cmdline 00:08:29.611 ************************************ 00:08:29.611 00:08:29.611 real 0m2.034s 00:08:29.611 user 0m2.731s 00:08:29.611 sys 0m0.375s 00:08:29.611 05:14:33 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.611 05:14:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:29.611 05:14:33 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:29.611 05:14:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.611 05:14:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.611 05:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.611 ************************************ 00:08:29.611 START TEST version 00:08:29.611 ************************************ 00:08:29.611 05:14:33 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:29.611 * Looking for test storage... 
00:08:29.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:29.611 05:14:33 version -- app/version.sh@17 -- # get_header_version major 00:08:29.611 05:14:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.611 05:14:33 version -- app/version.sh@14 -- # cut -f2 00:08:29.611 05:14:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.611 05:14:33 version -- app/version.sh@17 -- # major=24 00:08:29.611 05:14:33 version -- app/version.sh@18 -- # get_header_version minor 00:08:29.611 05:14:33 version -- app/version.sh@14 -- # cut -f2 00:08:29.611 05:14:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.611 05:14:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.611 05:14:33 version -- app/version.sh@18 -- # minor=9 00:08:29.611 05:14:33 version -- app/version.sh@19 -- # get_header_version patch 00:08:29.611 05:14:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.611 05:14:33 version -- app/version.sh@14 -- # cut -f2 00:08:29.611 05:14:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.611 05:14:33 version -- app/version.sh@19 -- # patch=0 00:08:29.611 05:14:33 version -- app/version.sh@20 -- # get_header_version suffix 00:08:29.870 05:14:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.870 05:14:33 version -- app/version.sh@14 -- # cut -f2 00:08:29.870 05:14:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.870 05:14:33 version -- app/version.sh@20 -- # suffix=-pre 00:08:29.870 05:14:33 version -- app/version.sh@22 -- # version=24.9 00:08:29.870 05:14:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:29.870 05:14:33 version -- app/version.sh@28 -- # version=24.9rc0 00:08:29.870 05:14:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:29.870 05:14:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:29.870 05:14:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:29.870 05:14:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:29.870 00:08:29.870 real 0m0.135s 00:08:29.870 user 0m0.080s 00:08:29.870 sys 0m0.081s 00:08:29.870 05:14:33 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.870 05:14:33 version -- common/autotest_common.sh@10 -- # set +x 00:08:29.870 ************************************ 00:08:29.870 END TEST version 00:08:29.870 ************************************ 00:08:29.870 05:14:33 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:29.870 05:14:33 -- spdk/autotest.sh@202 -- # uname -s 00:08:29.870 05:14:33 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:29.871 05:14:33 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:29.871 05:14:33 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:29.871 05:14:33 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:08:29.871 05:14:33 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:29.871 05:14:33 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:29.871 05:14:33 -- common/autotest_common.sh@730 -- # xtrace_disable 
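Note: version.sh above is just header scraping; each field comes out of include/spdk/version.h with the grep/cut/tr pipeline visible in the trace. A condensed sketch of that helper (same commands as traced, parameterized over the field name):

    get_header_version() {    # field: MAJOR, MINOR, PATCH or SUFFIX
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    echo "$(get_header_version MAJOR).$(get_header_version MINOR)$(get_header_version SUFFIX)"
    # 24.9-pre here; for the -pre suffix the script emits 24.9rc0, which is then
    # compared against python3 -c 'import spdk; print(spdk.__version__)'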
00:08:29.871 05:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.871 05:14:33 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:29.871 05:14:33 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:08:29.871 05:14:33 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:08:29.871 05:14:33 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:08:29.871 05:14:33 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:08:29.871 05:14:33 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:08:29.871 05:14:33 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:29.871 05:14:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.871 05:14:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.871 05:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.871 ************************************ 00:08:29.871 START TEST nvmf_tcp 00:08:29.871 ************************************ 00:08:29.871 05:14:33 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:29.871 * Looking for test storage... 00:08:29.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:29.871 05:14:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:29.871 05:14:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:29.871 05:14:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:29.871 05:14:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.871 05:14:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.871 05:14:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.871 ************************************ 00:08:29.871 START TEST nvmf_target_core 00:08:29.871 ************************************ 00:08:29.871 05:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:30.131 * Looking for test storage... 00:08:30.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.131 ************************************ 00:08:30.131 START TEST nvmf_abort 00:08:30.131 ************************************ 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:30.131 * Looking for test storage... 
00:08:30.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.131 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
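Note: with NET_TYPE=virt, nvmftestinit builds a disposable veth/bridge test bed instead of touching physical NICs; everything traced below runs inside it. A condensed sketch of the nvmf_veth_init steps (interface, namespace, and address names exactly as in common.sh; the second target interface, 10.0.0.3 on nvmf_tgt_if2, is wired up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # root (initiator) side -> target namespace across the bridge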
00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:30.132 Cannot find device "nvmf_init_br" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # true 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:30.132 Cannot find device "nvmf_tgt_br" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # true 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.132 Cannot find device "nvmf_tgt_br2" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # true 00:08:30.132 05:14:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:30.132 Cannot find device "nvmf_init_br" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # true 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:30.132 Cannot find device "nvmf_tgt_br" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # true 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:30.132 Cannot find device "nvmf_tgt_br2" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # true 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:30.132 Cannot find device "nvmf_br" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # true 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:30.132 Cannot find device "nvmf_init_if" 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # true 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.132 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:30.133 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.133 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:30.133 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.133 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.133 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.133 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.133 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:30.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:08:30.392 00:08:30.392 --- 10.0.0.2 ping statistics --- 00:08:30.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.392 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:30.392 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.392 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:30.392 00:08:30.392 --- 10.0.0.3 ping statistics --- 00:08:30.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.392 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:30.392 00:08:30.392 --- 10.0.0.1 ping statistics --- 00:08:30.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.392 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=64100 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 64100 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 64100 ']' 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.392 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.651 [2024-07-25 05:14:34.627252] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:08:30.651 [2024-07-25 05:14:34.627366] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.651 [2024-07-25 05:14:34.768087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.912 [2024-07-25 05:14:34.829525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.912 [2024-07-25 05:14:34.829584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.912 [2024-07-25 05:14:34.829596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.912 [2024-07-25 05:14:34.829605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.912 [2024-07-25 05:14:34.829612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.912 [2024-07-25 05:14:34.829852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.912 [2024-07-25 05:14:34.830293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.912 [2024-07-25 05:14:34.830293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 [2024-07-25 05:14:34.949659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 Malloc0 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 05:14:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 
Delay0 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 [2024-07-25 05:14:35.022098] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 05:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:31.171 [2024-07-25 05:14:35.203195] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:33.074 Initializing NVMe Controllers 00:08:33.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:33.074 controller IO queue size 128 less than required 00:08:33.074 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:33.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:33.074 Initialization complete. Launching workers. 
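The abort run above exercises a deliberately slow namespace: Delay0 wraps Malloc0 with roughly one-second average and p99 latencies, so queued reads are still in flight when the abort commands arrive, and the initiator submits at -q 128 against an I/O queue of the same size to force queueing. The target-side wiring can be reproduced with the same rpc.py calls recorded in the trace — a minimal sketch, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and an already-running nvmf_tgt:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256      # TCP transport, options exactly as in the trace
  "$rpc" bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM-backed bdev, 4096-byte blocks
  "$rpc" bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000            # avg/p99 read+write latency, ~1 s each (microseconds)
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host; -s: serial number
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # drive 128 queued reads for 1 second from lcore 0 and abort them:
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

In the results that follow, the abort accounting balances: 22155 successful plus 57 unsuccessful aborts equals the 22212 submitted.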
00:08:33.074 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22151 00:08:33.074 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22212, failed to submit 62 00:08:33.074 success 22155, unsuccess 57, failed 0 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.074 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.333 rmmod nvme_tcp 00:08:33.333 rmmod nvme_fabrics 00:08:33.333 rmmod nvme_keyring 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 64100 ']' 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 64100 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 64100 ']' 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 64100 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64100 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:33.333 killing process with pid 64100 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64100' 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 64100 00:08:33.333 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 64100 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:33.592 ************************************ 00:08:33.592 END TEST nvmf_abort 00:08:33.592 ************************************ 00:08:33.592 00:08:33.592 real 0m3.485s 00:08:33.592 user 0m9.819s 00:08:33.592 sys 0m0.897s 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.592 ************************************ 00:08:33.592 START TEST nvmf_ns_hotplug_stress 00:08:33.592 ************************************ 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:33.592 * Looking for test storage... 
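The teardown above goes through autotest_common.sh's killprocess helper: it first verifies the pid is still alive with kill -0, resolves the process name via ps (reactor_1 here) so it can special-case processes running under sudo, then kills and reaps the target. A condensed, hedged paraphrase of what the trace shows — the sudo branch is inferred from the '[' reactor_1 = sudo ']' test, not spelled out in the log:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                      # bail out if the process is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")         # e.g. "reactor_1" in this run
      echo "killing process with pid $pid"
      if [ "$name" = sudo ]; then
          sudo kill "$pid"                            # inferred: elevate when the target runs under sudo
      else
          kill "$pid"
      fi
      wait "$pid" 2>/dev/null || true                 # reap it if it is our child; ignore otherwise
  }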
00:08:33.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.592 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:33.593 Cannot find device "nvmf_tgt_br" 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.593 Cannot find device "nvmf_tgt_br2" 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:33.593 Cannot find device "nvmf_tgt_br" 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:33.593 Cannot find device "nvmf_tgt_br2" 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:33.593 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.853 05:14:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:33.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:33.853 00:08:33.853 --- 10.0.0.2 ping statistics --- 00:08:33.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.853 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:33.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:08:33.853 00:08:33.853 --- 10.0.0.3 ping statistics --- 00:08:33.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.853 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:33.853 00:08:33.853 --- 10.0.0.1 ping statistics --- 00:08:33.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.853 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.853 05:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=64330 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 64330 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:33.853 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 64330 ']' 00:08:33.854 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.854 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.854 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.854 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.854 05:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.112 [2024-07-25 05:14:38.074633] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
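The nvmf_veth_init sequence above builds a self-contained test topology: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1), the two target interfaces move into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the peer ends are joined by the nvmf_br bridge, and iptables admits NVMe/TCP on port 4420 before the three pings verify reachability. Condensed from the trace (the initial "Cannot find device" / "No such file or directory" messages are just the cleanup of a topology that did not yet exist):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check

This is also why nvmfappstart launches the target as ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE: the listener at 10.0.0.2:4420 only exists inside that namespace.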
00:08:34.112 [2024-07-25 05:14:38.074724] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.112 [2024-07-25 05:14:38.217027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.112 [2024-07-25 05:14:38.283020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.112 [2024-07-25 05:14:38.283262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.112 [2024-07-25 05:14:38.283407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.112 [2024-07-25 05:14:38.283525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.112 [2024-07-25 05:14:38.283648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.112 [2024-07-25 05:14:38.283859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.112 [2024-07-25 05:14:38.283918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.112 [2024-07-25 05:14:38.283920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.047 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.048 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:35.048 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.048 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.048 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.048 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.048 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:35.048 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.306 [2024-07-25 05:14:39.322550] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.306 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:35.564 05:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.131 [2024-07-25 05:14:40.001024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.131 05:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.390 05:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:36.648 Malloc0 00:08:36.648 05:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.907 Delay0 00:08:36.907 05:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.165 05:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:37.435 NULL1 00:08:37.435 05:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:37.714 05:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=64467 00:08:37.714 05:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:37.714 05:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.714 05:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:39.089 Read completed with error (sct=0, sc=11) 00:08:39.089 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.346 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:39.346 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:39.603 true 00:08:39.603 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:39.603 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.190 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.758 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 
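From here the log settles into the stress cycle proper: with spdk_nvme_perf (PERF_PID=64467) issuing 128-deep random reads for 30 seconds, the script repeatedly detaches namespace 1, re-attaches Delay0, and grows the NULL1 bdev by one block, looping while the perf process is still alive. Roughly, as a hedged condensation of ns_hotplug_stress.sh's trace (exact statement order differs slightly in the script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # stop once the 30 s perf run exits
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc" nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"       # 1001, 1002, ... as seen below
  done

The bare "true" after each resize is the call's JSON-RPC result, and the suppressed "Read completed with error (sct=0, sc=11)" messages are the expected fallout of reads racing a namespace that has just been removed.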
00:08:40.758 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:40.758 true 00:08:41.016 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:41.016 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.274 05:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.532 05:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:41.532 05:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:41.789 true 00:08:41.789 05:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:41.789 05:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.048 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.306 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:42.306 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:42.565 true 00:08:42.565 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:42.565 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.841 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.099 05:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:43.099 05:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:43.358 true 00:08:43.358 05:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:43.358 05:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.293 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.551 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:44.551 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:44.809 true 00:08:44.809 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:44.809 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.067 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.325 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:45.325 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:45.582 true 00:08:45.582 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:45.582 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.840 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.098 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:46.098 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:46.356 true 00:08:46.356 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:46.356 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.290 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.548 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:47.548 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:47.805 true 00:08:47.805 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:47.805 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.370 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.370 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:48.371 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1010 00:08:48.629 true 00:08:48.629 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:48.629 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.887 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.181 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:49.181 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:49.438 true 00:08:49.438 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:49.438 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.369 05:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.627 05:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:50.627 05:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:50.884 true 00:08:50.884 05:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:50.884 05:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.141 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.399 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:51.399 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:51.657 true 00:08:51.657 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:51.657 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.914 05:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.172 05:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:52.172 05:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:52.430 true 00:08:52.430 05:14:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:52.430 05:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.364 05:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.622 05:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:53.622 05:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:53.880 true 00:08:53.880 05:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:53.880 05:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.138 05:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.398 05:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:54.398 05:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:54.657 true 00:08:54.657 05:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:54.657 05:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.940 05:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.198 05:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:55.198 05:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:55.457 true 00:08:55.457 05:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:55.457 05:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.392 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.651 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:56.651 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:56.910 true 00:08:56.910 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 64467 00:08:56.910 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.168 05:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.426 05:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:57.426 05:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:57.684 true 00:08:57.684 05:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:57.684 05:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.942 05:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.505 05:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:58.505 05:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:58.505 true 00:08:58.505 05:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:58.505 05:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.437 05:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.694 05:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:59.694 05:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:59.952 true 00:08:59.952 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:08:59.952 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.212 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.471 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:00.471 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:01.037 true 00:09:01.037 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:01.037 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.316 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.574 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:01.574 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:01.832 true 00:09:01.832 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:01.832 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.090 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.348 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:02.348 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:02.605 true 00:09:02.605 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:02.605 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.538 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.796 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:03.796 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:04.054 true 00:09:04.054 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:04.054 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.312 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.572 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:04.572 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:04.830 true 00:09:05.088 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:05.088 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.347 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.347 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:05.347 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:05.605 true 00:09:05.605 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:05.605 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.948 05:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.514 05:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:06.515 05:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:06.772 true 00:09:06.772 05:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:06.772 05:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.339 05:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.906 05:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:07.906 05:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:07.906 true 00:09:07.906 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467 00:09:07.906 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.906 Initializing NVMe Controllers 00:09:07.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:07.906 Controller IO queue size 128, less than required. 00:09:07.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:07.906 Controller IO queue size 128, less than required. 00:09:07.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:07.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:07.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:07.906 Initialization complete. Launching workers. 
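Each cycle in the trace above is one pass through the hotplug loop in test/nvmf/target/ns_hotplug_stress.sh (the @44-@50 markers): while the I/O generator (pid 64467 here) is alive, the script hot-removes namespace 1, re-adds it backed by the Delay0 bdev, bumps null_size, and resizes the NULL1 bdev under load. The suppressed read errors above are expected — I/O aimed at the momentarily detached namespace completes with an error, which the tool rate-limits (hence "Message suppressed 999 times"). A minimal sketch of the loop, reconstructed from the xtrace alone (the perf_pid name and the exact loop form are assumptions, not the script's verbatim source):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$perf_pid"; do                                         # @44: loop while the I/O generator runs
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: hot-add it back on Delay0
        ((null_size++))                                                   # @49: 1019, 1020, ... in the trace
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                     # @50: resize NULL1 under I/O
    done

The controller-initialization block just above and the statistics table below are apparently the I/O generator's buffered stdout, flushed as it exits; kill -0 then fails with "No such process" and the script proceeds to cleanup (@53-@55).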
00:09:07.906 ========================================================
00:09:07.906 Latency(us)
00:09:07.906 Device Information : IOPS MiB/s Average min max
00:09:07.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 315.85 0.15 148586.12 3645.77 1112900.83
00:09:07.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7346.15 3.59 17423.90 3002.41 721122.80
00:09:07.906 ========================================================
00:09:07.906 Total : 7662.00 3.74 22830.77 3002.41 1112900.83
00:09:08.164 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:08.423 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:09:08.423 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:09:08.682 true
00:09:08.682 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64467
00:09:08.682 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (64467) - No such process
00:09:08.682 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 64467
00:09:08.682 05:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:08.940 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:09.198 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:09.198 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:09.198 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:09.198 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:09.198 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:09.454 null0
00:09:09.454 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:09.454 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:09.454 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:09.712 null1
00:09:09.712 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:09.712 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:09.712 05:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:09.970 null2
00:09:09.970 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:09.970 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:09.971 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:10.228 null3 00:09:10.228 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:10.228 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:10.228 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:10.486 null4 00:09:10.486 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:10.486 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:10.486 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:10.761 null5 00:09:10.761 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:10.761 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:10.761 05:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:11.021 null6 00:09:11.021 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.021 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.021 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:11.280 null7 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
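By this point the script has torn down the single-namespace phase and set up the concurrent one: nthreads=8 (@58), eight null bdevs null0-null7 created via bdev_null_create (@59-@60, with arguments in rpc.py's name / size-in-MB / block-size order), and one add_remove worker forked per bdev (@62-@64), with the child pids collected for the wait at @66. A sketch of the driver, reconstructed from the trace (the two-loop structure is inferred, not copied from the script):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do      # @59-@60: one 100 MB, 4096-byte-block null bdev per worker
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do      # @62-@64: fork the hotplug workers
        add_remove $((i + 1)) "null$i" &      # nsid i+1 is paired with bdev null$i
        pids+=($!)
    done
    wait "${pids[@]}"                         # @66: join all eight (pids 65532..65545 below)

From here on the xtrace of the eight background jobs interleaves in the log, which is why the @62/@63/@64 and @14-@18 lines below appear shuffled rather than in script order.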
00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
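Each worker runs the add_remove shell function (@14-@18), whose body is fully visible in the trace: ten rounds of attach-then-detach for its own namespace ID and backing bdev. Note that this phase pins namespace IDs explicitly with -n, whereas the first phase called nvmf_subsystem_add_ns without -n and let the target assign the ID. Reassembled from the @14-@18 lines (formatting mine):

    add_remove() {
        local nsid=$1 bdev=$2                 # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do        # @16: ten add/remove rounds per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

With eight of these running at once against cnode1, the add/remove storm below exercises the target's concurrent namespace attach/detach paths until every worker finishes its ten rounds.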
00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.280 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 65532 65533 65535 65538 65540 65541 65543 65545 00:09:11.538 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.538 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:11.538 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.796 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:12.054 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.054 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.054 05:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:12.054 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.054 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.054 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:12.054 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:12.054 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.054 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:12.054 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.055 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.055 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:12.055 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.312 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:12.571 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:12.829 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.829 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.829 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:12.829 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:12.829 05:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.087 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.088 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.088 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.088 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.088 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.088 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.348 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.612 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.612 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.612 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.612 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.612 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.612 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.612 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.613 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.870 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.870 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.870 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.870 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.870 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.870 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.870 05:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.870 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.870 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.871 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.129 
05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.129 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.387 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.387 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.387 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.387 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.387 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.387 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.387 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.646 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.903 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.903 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.903 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.903 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.903 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.903 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.904 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.904 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.904 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.904 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.904 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.904 05:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.904 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.904 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.904 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.904 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.904 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.161 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.161 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.161 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.161 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.161 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.161 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.161 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.419 05:15:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.419 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.419 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.419 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.419 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.419 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.419 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.677 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.935 05:15:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.935 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.935 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.935 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.935 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.935 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.935 05:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.935 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.935 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.935 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.935 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.193 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.450 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.708 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.708 
05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.968 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.968 05:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.968 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.968 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.968 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.968 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.968 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.968 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.237 05:15:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.237 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.495 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.753 05:15:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.753 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.010 05:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:09:18.010 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.268 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.525 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.782 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.782 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.782 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.782 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.783 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.783 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.783 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.783 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.783 05:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.041 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.041 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.041 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.041 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.310 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.310 rmmod nvme_tcp 00:09:19.571 rmmod nvme_fabrics 00:09:19.571 rmmod nvme_keyring 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@124 -- # set -e 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 64330 ']' 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 64330 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 64330 ']' 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 64330 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64330 00:09:19.571 killing process with pid 64330 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64330' 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 64330 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 64330 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.571 00:09:19.571 real 0m46.121s 00:09:19.571 user 3m52.920s 00:09:19.571 sys 0m13.760s 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.571 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.571 ************************************ 00:09:19.571 END TEST nvmf_ns_hotplug_stress 00:09:19.571 ************************************ 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:19.830 05:15:23 
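
For readers skimming the trace: the interleaved @16-@18 lines in the nvmf_ns_hotplug_stress run that just ended are eight per-namespace workers, each cycling the same namespace ID in and out of the subsystem ten times. A minimal sketch of one plausible shape for that loop, inferred from the xtrace line numbers (the null0..null7 bdev names, the nsid-to-bdev pairing, and the rpc.py path are taken from the log; the backgrounded-worker layout is an assumption, not the verbatim script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for nsid in {1..8}; do
        (
            for ((i = 0; i < 10; ++i)); do                                       # @16 in the trace
                "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"   # @17
                "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"                       # @18
            done
        ) &
    done
    wait

Running the eight workers concurrently is what produces the out-of-order add/remove interleaving in the trace, and it is the point of the test: the target's namespace attach and detach paths are exercised under contention rather than one at a time.
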
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.830 ************************************ 00:09:19.830 START TEST nvmf_delete_subsystem 00:09:19.830 ************************************ 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:19.830 * Looking for test storage... 00:09:19.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.830 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.831 05:15:23 
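
Two details in the common.sh sourcing traced above are easy to misread. The repeated /opt/golangci, /opt/protoc and /opt/go segments in the echoed PATH are not corruption: paths/export.sh prepends its directories every time it is sourced, once per nested test, so the prefix trio accumulates across the run. And the host identity used for later connections is minted fresh per run; a minimal sketch of the derivation, assuming the uuid is extracted by stripping everything up to the last colon of the generated NQN (variable names as in the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Deriving NVME_HOSTID from NVME_HOSTNQN keeps the two guaranteed consistent for the --hostnqn/--hostid pair handed to nvme connect later on.
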
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:19.831 Cannot find device "nvmf_tgt_br" 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.831 Cannot find device "nvmf_tgt_br2" 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:19.831 Cannot find device "nvmf_tgt_br" 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:19.831 Cannot find device "nvmf_tgt_br2" 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:19.831 05:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:20.090 05:15:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:20.090 00:09:20.090 --- 10.0.0.2 ping statistics --- 00:09:20.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.090 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:09:20.090 00:09:20.090 --- 10.0.0.3 ping statistics --- 00:09:20.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.090 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:20.090 00:09:20.090 --- 10.0.0.1 ping statistics --- 00:09:20.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.090 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.090 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=66894 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 66894 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 66894 ']' 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.091 05:15:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.388 [2024-07-25 05:15:24.322877] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
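
The veth plumbing traced above is compact enough to replay by hand; the earlier "Cannot find device" and "Cannot open network namespace" messages are just the cleanup pass deleting leftovers from a previous run before this setup. A condensed sketch of the topology nvmf_veth_init builds (interface names, addresses, and firewall rules are all taken verbatim from the log; the individual ip link set ... up calls are elided for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target port 1, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2, 10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # joins the three peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings are the smoke test for this topology: 10.0.0.2 and 10.0.0.3 are reached from the host side and 10.0.0.1 from inside the namespace, confirming both directions across the bridge before nvmf_tgt is launched under ip netns exec. The -m 0x3 core mask on that launch pins the target to two reactors, which matches the "Total cores available: 2" and per-core reactor notices that follow.
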
00:09:20.388 [2024-07-25 05:15:24.323027] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.388 [2024-07-25 05:15:24.485194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.665 [2024-07-25 05:15:24.563706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.665 [2024-07-25 05:15:24.563767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.665 [2024-07-25 05:15:24.563779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.665 [2024-07-25 05:15:24.563788] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.665 [2024-07-25 05:15:24.563795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.665 [2024-07-25 05:15:24.563987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.665 [2024-07-25 05:15:24.563993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.230 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.230 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:21.230 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.230 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.231 [2024-07-25 05:15:25.384634] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.231 [2024-07-25 05:15:25.400753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.231 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.489 NULL1 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.489 Delay0 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=66945 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:21.489 05:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:21.489 [2024-07-25 05:15:25.595379] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
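
Before the perf run starts, the test has assembled a deliberately slow target: a 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev whose read and write latencies are all set to 1000000 us (about 1 s), so that queued I/O is still in flight when the subsystem is deleted out from under it. The RPC sequence, condensed from the trace above (every command and argument appears verbatim in the log; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the perf pid is captured roughly as shown):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

With 128 queued 512-byte commands stuck behind a one-second delay bdev, the nvmf_delete_subsystem call traced next tears the subsystem down while those commands are outstanding. The long runs of "completed with error (sct=0, sc=8)" that follow are those aborts surfacing on the initiator: in the NVMe generic status type, sc=8 is "Command Aborted due to SQ Deletion", consistent with the queues being destroyed by the delete.
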
00:09:23.387 05:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.387 05:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.387 05:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 starting I/O failed: -6 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 [2024-07-25 05:15:27.629092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1d910 is same with the state(5) to be set 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 
Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Read completed with error (sct=0, sc=8) 00:09:23.646 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with 
error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 starting I/O failed: -6 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 [2024-07-25 05:15:27.632557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fada4000c00 is same with the state(5) to be set 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 
00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Write completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:23.647 Read completed with error (sct=0, sc=8) 00:09:24.579 [2024-07-25 05:15:28.611625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe510 is same with the state(5) to be set 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 [2024-07-25 05:15:28.631103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21a80 is same with the state(5) to be set 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 [2024-07-25 05:15:28.631383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf206c0 is same with the state(5) to be set 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read 
completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 [2024-07-25 05:15:28.632006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fada400d000 is same with the state(5) to be set 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 Write completed with error (sct=0, sc=8) 00:09:24.579 Read completed with error (sct=0, sc=8) 00:09:24.579 [2024-07-25 05:15:28.633377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fada400d660 is same with the state(5) to be set 00:09:24.579 Initializing NVMe Controllers 00:09:24.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:24.579 Controller IO queue size 128, less than required. 00:09:24.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:24.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:24.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:24.579 Initialization complete. Launching workers. 
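The long runs of "Read/Write completed with error (sct=0, sc=8)" above are the expected fallout of tearing the subsystem down while spdk_nvme_perf still has I/O in flight: status code type 0 (generic command status) with status code 0x08 decodes to "Command Aborted due to SQ Deletion", the same (00/08) status the qpair prints in long form further down this log. When triaging a run like this it is easier to tally the aborts than to read them; a minimal sketch, assuming the console output was saved to build.log (the file name is illustrative):

    # Tally aborted completions in a saved console log (log path is an assumption).
    grep -Eo '(Read|Write) completed with error \(sct=0, sc=8\)' build.log | sort | uniq -c
    # sct=0, sc=8 -> Generic Command Status / Command Aborted due to SQ Deletion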
00:09:24.579 ======================================================== 00:09:24.579 Latency(us) 00:09:24.579 Device Information : IOPS MiB/s Average min max 00:09:24.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.48 0.08 903703.04 453.09 1045865.74 00:09:24.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.08 0.07 986481.54 1068.96 2002304.10 00:09:24.579 ======================================================== 00:09:24.579 Total : 316.57 0.15 942948.11 453.09 2002304.10 00:09:24.579 00:09:24.579 [2024-07-25 05:15:28.633815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe510 (9): Bad file descriptor 00:09:24.579 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:24.579 05:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.579 05:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:24.579 05:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66945 00:09:24.579 05:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66945 00:09:25.146 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (66945) - No such process 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 66945 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 66945 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 66945 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.146 [2024-07-25 05:15:29.154111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=66996 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:25.146 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.404 [2024-07-25 05:15:29.342701] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
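The second pass above rebuilds the target state over JSON-RPC before launching perf again: create the subsystem with a 10-namespace cap, attach the TCP listener on 10.0.0.2:4420, then expose the delay bdev as a namespace. Outside the harness the same sequence can be driven directly with scripts/rpc.py; a minimal sketch, assuming a running nvmf_tgt on the default RPC socket and an existing bdev named Delay0:

    # Recreate the subsystem state exercised by delete_subsystem.sh (sketch).
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The deprecation notice just above is benign here: the initiator touched the discovery subsystem through a listener that was only added to cnode1, which current SPDK still allows but warns about.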
00:09:25.663 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.663 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:25.663 05:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.229 05:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.229 05:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:26.229 05:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.795 05:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.795 05:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:26.795 05:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.053 05:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:27.053 05:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:27.053 05:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.619 05:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:27.619 05:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:27.619 05:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:28.221 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:28.221 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:28.221 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:28.479 Initializing NVMe Controllers 00:09:28.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:28.479 Controller IO queue size 128, less than required. 00:09:28.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:28.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:28.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:28.479 Initialization complete. Launching workers. 
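The repeating (( delay++ > 20 )) / kill -0 66996 / sleep 0.5 triples above are the script's bounded liveness poll: kill -0 delivers no signal and only tests whether the perf pid still exists, and the loop abandons the wait after roughly ten seconds. The idiom in isolation (pid and bound are illustrative):

    # Bounded wait for a background process to exit (sketch).
    perf_pid=66996          # illustrative pid taken from the run above
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo 'timed out waiting for perf'; exit 1; }
        sleep 0.5
    done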
00:09:28.479 ======================================================== 00:09:28.479 Latency(us) 00:09:28.479 Device Information : IOPS MiB/s Average min max 00:09:28.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005699.60 1000226.99 1040598.31 00:09:28.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003508.76 1000171.10 1016048.06 00:09:28.479 ======================================================== 00:09:28.479 Total : 256.00 0.12 1004604.18 1000171.10 1040598.31 00:09:28.479 00:09:28.737 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:28.737 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66996 00:09:28.737 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (66996) - No such process 00:09:28.737 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 66996 00:09:28.737 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:28.737 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:28.737 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.738 rmmod nvme_tcp 00:09:28.738 rmmod nvme_fabrics 00:09:28.738 rmmod nvme_keyring 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 66894 ']' 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 66894 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 66894 ']' 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 66894 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66894 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.738 killing process with pid 66894 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66894' 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 66894 00:09:28.738 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 66894 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.997 05:15:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:28.997 00:09:28.997 real 0m9.214s 00:09:28.997 user 0m28.616s 00:09:28.997 sys 0m1.497s 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:28.997 ************************************ 00:09:28.997 END TEST nvmf_delete_subsystem 00:09:28.997 ************************************ 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.997 ************************************ 00:09:28.997 START TEST nvmf_host_management 00:09:28.997 ************************************ 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:28.997 * Looking for test storage... 
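Before host_management starts, nvmftestfini above unwinds the previous test in reverse order of setup: unload the kernel initiator modules, kill the target only after confirming that pid 66894 still names an SPDK reactor process, then remove the network namespace and flush the initiator interface. Condensed, the teardown amounts to the following (pid and names follow the log; treat this as a sketch of the helpers, not their exact bodies):

    # Condensed teardown in the spirit of nvmftestfini (sketch).
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # drop kernel initiator modules
    comm=$(ps --no-headers -o comm= 66894)                # target pid from this run
    [[ $comm == reactor_0 ]] && kill 66894                # kill only if it is still SPDK
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true  # remove the target namespace
    ip -4 addr flush nvmf_init_if                         # clear the initiator veth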
00:09:28.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.997 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:28.998 Cannot find device "nvmf_tgt_br" 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:28.998 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.256 Cannot find device "nvmf_tgt_br2" 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:29.256 Cannot find device "nvmf_tgt_br" 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:29.256 Cannot find device "nvmf_tgt_br2" 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.256 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:29.257 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:29.257 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.257 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.257 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:29.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:29.514 00:09:29.514 --- 10.0.0.2 ping statistics --- 00:09:29.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.514 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:29.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:09:29.514 00:09:29.514 --- 10.0.0.3 ping statistics --- 00:09:29.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.514 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:29.514 00:09:29.514 --- 10.0.0.1 ping statistics --- 00:09:29.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.514 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:29.514 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=67227 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 67227 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67227 ']' 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.515 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.515 [2024-07-25 05:15:33.540122] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:09:29.515 [2024-07-25 05:15:33.540217] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.515 [2024-07-25 05:15:33.673752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.772 [2024-07-25 05:15:33.738593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.772 [2024-07-25 05:15:33.738688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.772 [2024-07-25 05:15:33.738710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.772 [2024-07-25 05:15:33.738727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.772 [2024-07-25 05:15:33.738742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.772 [2024-07-25 05:15:33.738888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.772 [2024-07-25 05:15:33.739366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.772 [2024-07-25 05:15:33.739456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.772 [2024-07-25 05:15:33.739513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 [2024-07-25 05:15:33.864798] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:29.772 05:15:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 Malloc0 00:09:29.772 [2024-07-25 05:15:33.937567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67280 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67280 /var/tmp/bdevperf.sock 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67280 ']' 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
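bdevperf above takes its attach configuration as --json /dev/fd/63, i.e. a bash process substitution: gen_nvmf_target_json expands the heredoc template shown just below into one bdev_nvme_attach_controller entry per subsystem and hands the result to the app as an anonymous file descriptor. The standalone equivalent with an explicit file (binary path and flags from the log; the config file name, and the outer "subsystems" wrapper that the JSON loader consumes, are spelled out here as a sketch and may differ in detail by SPDK version):

    # Launch bdevperf against the target from a plain config file (sketch).
    cat > /tmp/nvme0.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0" } } ] } ] }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10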
00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:30.031 { 00:09:30.031 "params": { 00:09:30.031 "name": "Nvme$subsystem", 00:09:30.031 "trtype": "$TEST_TRANSPORT", 00:09:30.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.031 "adrfam": "ipv4", 00:09:30.031 "trsvcid": "$NVMF_PORT", 00:09:30.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.031 "hdgst": ${hdgst:-false}, 00:09:30.031 "ddgst": ${ddgst:-false} 00:09:30.031 }, 00:09:30.031 "method": "bdev_nvme_attach_controller" 00:09:30.031 } 00:09:30.031 EOF 00:09:30.031 )") 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:30.031 05:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:30.031 "params": { 00:09:30.031 "name": "Nvme0", 00:09:30.031 "trtype": "tcp", 00:09:30.031 "traddr": "10.0.0.2", 00:09:30.031 "adrfam": "ipv4", 00:09:30.031 "trsvcid": "4420", 00:09:30.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:30.031 "hdgst": false, 00:09:30.031 "ddgst": false 00:09:30.031 }, 00:09:30.031 "method": "bdev_nvme_attach_controller" 00:09:30.031 }' 00:09:30.031 [2024-07-25 05:15:34.043313] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:09:30.031 [2024-07-25 05:15:34.043418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67280 ] 00:09:30.031 [2024-07-25 05:15:34.184301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.289 [2024-07-25 05:15:34.276393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.289 Running I/O for 10 seconds... 
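With bdevperf running its 10-second verify workload, the waitforio logic that follows does not sleep blindly; it polls the initiator-side I/O counters over the bdevperf RPC socket, retrying up to ten times until at least 100 reads have completed (the first sample below reads 67, the second 450, at which point the script moves on to removing the host from the subsystem). The polling idiom on its own (socket and bdev names follow the log):

    # Wait until a bdev has served at least 100 reads (sketch).
    i=10
    while (( i-- )); do
        ops=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
              jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && break
        sleep 0.25
    done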
00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:30.546 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.547 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.547 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.547 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:30.547 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:30.547 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 05:15:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=450 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 450 -ge 100 ']' 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.806 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 [2024-07-25 05:15:34.886601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4310 is same with the state(5) to be set 00:09:30.806 [2024-07-25 05:15:34.886677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4310 is same with the state(5) to be set 00:09:30.806 [2024-07-25 05:15:34.886700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4310 is same with the state(5) to be set 00:09:30.806 [2024-07-25 05:15:34.886717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4310 is same with the state(5) to be set 00:09:30.806 [2024-07-25 05:15:34.887847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.806 [2024-07-25 05:15:34.887898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.887917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.807 [2024-07-25 05:15:34.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.887945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.807 [2024-07-25 05:15:34.887981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.888002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.807 [2024-07-25 05:15:34.888016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.888028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9af0 is same with the state(5) to be set 00:09:30.807 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.807 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:30.807 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.807 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.807 task offset: 65536 on job bdev=Nvme0n1 fails 00:09:30.807 00:09:30.807 Latency(us) 00:09:30.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.807 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:30.807 Job: Nvme0n1 ended in about 0.47 seconds with error 00:09:30.807 Verification LBA range: start 0x0 length 0x400 00:09:30.807 Nvme0n1 : 0.47 1091.35 68.21 136.42 0.00 50100.66 2651.23 50998.92 00:09:30.807 =================================================================================================================== 00:09:30.807 Total : 1091.35 68.21 136.42 0.00 50100.66 2651.23 50998.92 00:09:30.807 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.807 05:15:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:30.807 [2024-07-25 05:15:34.899413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d9af0 (9): Bad file descriptor 00:09:30.807 [2024-07-25 05:15:34.899533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:30.807 [2024-07-25 05:15:34.899555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.899590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:30.807 [2024-07-25 05:15:34.899607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.899629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:30.807 [2024-07-25 05:15:34.899642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.899656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:30.807 [2024-07-25 05:15:34.899667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.899681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:30.807 [2024-07-25 05:15:34.899693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.807 [2024-07-25 05:15:34.899706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:30.807 [2024-07-25 05:15:34.899718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-25 05:15:34.899732 - 05:15:34.901492] nvme_qpair.c: [... 58 further WRITE command / ABORTED - SQ DELETION (00/08) completion pairs elided; they repeat identically for sqid:1 cid:6 through cid:63, lba:66304 through lba:73600 in steps of 128 ...]
00:09:30.808 [2024-07-25 05:15:34.901576] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14d9820 was disconnected and freed. reset controller.
00:09:30.809 [2024-07-25 05:15:34.903107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:09:30.809 [2024-07-25 05:15:34.905809] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:30.809 [2024-07-25 05:15:34.910202] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
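The abort storm above is driven by two RPCs; reduced to a sketch (NQNs taken from the trace; bare rpc.py calls are assumed here in place of the script's rpc_cmd wrapper):

    # host_management.sh@84-87: deny the host mid-I/O, then restore it
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # in-flight WRITEs on the host now complete as ABORTED - SQ DELETION (00/08),
    # bdevperf reports "task offset ... fails", and the initiator resets the controller
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1    # give the reconnect/reset time to complete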
00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67280 00:09:31.802 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67280) - No such process 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:31.802 { 00:09:31.802 "params": { 00:09:31.802 "name": "Nvme$subsystem", 00:09:31.802 "trtype": "$TEST_TRANSPORT", 00:09:31.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.802 "adrfam": "ipv4", 00:09:31.802 "trsvcid": "$NVMF_PORT", 00:09:31.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.802 "hdgst": ${hdgst:-false}, 00:09:31.802 "ddgst": ${ddgst:-false} 00:09:31.802 }, 00:09:31.802 "method": "bdev_nvme_attach_controller" 00:09:31.802 } 00:09:31.802 EOF 00:09:31.802 )") 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:31.802 05:15:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:31.802 "params": { 00:09:31.802 "name": "Nvme0", 00:09:31.802 "trtype": "tcp", 00:09:31.802 "traddr": "10.0.0.2", 00:09:31.802 "adrfam": "ipv4", 00:09:31.802 "trsvcid": "4420", 00:09:31.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:31.802 "hdgst": false, 00:09:31.802 "ddgst": false 00:09:31.802 }, 00:09:31.802 "method": "bdev_nvme_attach_controller" 00:09:31.802 }' 00:09:31.802 [2024-07-25 05:15:35.975067] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:09:31.802 [2024-07-25 05:15:35.975196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67326 ] 00:09:32.059 [2024-07-25 05:15:36.144246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.059 [2024-07-25 05:15:36.230859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.317 Running I/O for 1 seconds... 
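The --json /dev/fd/62 above is bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller JSON shown and bdevperf reads it as a file. A hand-typed equivalent of the host_management.sh@100 invocation (a sketch; path relative to the SPDK repo):

    # same workload, with the generated config made explicit
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1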
00:09:33.249 00:09:33.249 Latency(us) 00:09:33.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.249 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:33.249 Verification LBA range: start 0x0 length 0x400 00:09:33.249 Nvme0n1 : 1.01 895.16 55.95 0.00 0.00 68104.29 1020.28 83409.45 00:09:33.249 =================================================================================================================== 00:09:33.249 Total : 895.16 55.95 0.00 0.00 68104.29 1020.28 83409.45 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.508 rmmod nvme_tcp 00:09:33.508 rmmod nvme_fabrics 00:09:33.508 rmmod nvme_keyring 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 67227 ']' 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 67227 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 67227 ']' 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 67227 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.508 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67227 00:09:33.766 killing process with pid 67227 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:33.766 05:15:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67227' 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 67227 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 67227 00:09:33.766 [2024-07-25 05:15:37.835583] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:33.766 00:09:33.766 real 0m4.851s 00:09:33.766 user 0m18.850s 00:09:33.766 sys 0m1.093s 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:33.766 ************************************ 00:09:33.766 END TEST nvmf_host_management 00:09:33.766 ************************************ 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.766 05:15:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.024 ************************************ 00:09:34.024 START TEST nvmf_lvol 00:09:34.024 ************************************ 00:09:34.024 05:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:34.024 * Looking for test storage... 
00:09:34.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.024 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.024 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:34.024 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.024 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.024 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.024 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:34.025 Cannot find device "nvmf_tgt_br" 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.025 Cannot find device "nvmf_tgt_br2" 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:34.025 Cannot find device "nvmf_tgt_br" 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:34.025 Cannot find device "nvmf_tgt_br2" 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:34.025 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.283 05:15:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.283 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:34.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:09:34.284 00:09:34.284 --- 10.0.0.2 ping statistics --- 00:09:34.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.284 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:34.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:34.284 00:09:34.284 --- 10.0.0.3 ping statistics --- 00:09:34.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.284 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:34.284 00:09:34.284 --- 10.0.0.1 ping statistics --- 00:09:34.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.284 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67537 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67537 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 67537 ']' 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.284 05:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:34.284 [2024-07-25 05:15:38.448894] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:09:34.284 [2024-07-25 05:15:38.449019] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.575 [2024-07-25 05:15:38.594697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.575 [2024-07-25 05:15:38.662250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.575 [2024-07-25 05:15:38.662311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.575 [2024-07-25 05:15:38.662325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.575 [2024-07-25 05:15:38.662336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.575 [2024-07-25 05:15:38.662344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.575 [2024-07-25 05:15:38.662482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.575 [2024-07-25 05:15:38.662723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.575 [2024-07-25 05:15:38.662731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.534 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.534 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:35.534 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.534 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.534 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:35.534 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.534 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:35.791 [2024-07-25 05:15:39.763293] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.791 05:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.048 05:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:36.048 05:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.305 05:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:36.305 05:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:36.562 05:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:36.819 05:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a506e88c-af98-4cc3-8c3f-47fc0880ba2d 00:09:36.819 05:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
a506e88c-af98-4cc3-8c3f-47fc0880ba2d lvol 20 00:09:37.384 05:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=836e71f8-be10-4873-b798-26b4615e276c 00:09:37.384 05:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:37.698 05:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 836e71f8-be10-4873-b798-26b4615e276c 00:09:37.956 05:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:38.215 [2024-07-25 05:15:42.280005] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.215 05:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.473 05:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67690 00:09:38.473 05:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:38.473 05:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:39.404 05:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 836e71f8-be10-4873-b798-26b4615e276c MY_SNAPSHOT 00:09:39.970 05:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=910ede93-be1f-434d-a05f-7a35ef6abdde 00:09:39.970 05:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 836e71f8-be10-4873-b798-26b4615e276c 30 00:09:40.229 05:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 910ede93-be1f-434d-a05f-7a35ef6abdde MY_CLONE 00:09:40.487 05:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1a235f00-1bf5-4735-841e-54e717e8cd91 00:09:40.487 05:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 1a235f00-1bf5-4735-841e-54e717e8cd91 00:09:41.052 05:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67690 00:09:49.159 Initializing NVMe Controllers 00:09:49.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:49.159 Controller IO queue size 128, less than required. 00:09:49.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:49.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:49.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:49.159 Initialization complete. Launching workers. 
00:09:49.160 ========================================================
00:09:49.160 Latency(us)
00:09:49.160 Device Information : IOPS MiB/s Average min max
00:09:49.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8629.26 33.71 14844.17 521.49 98228.60
00:09:49.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9694.21 37.87 13204.41 4848.11 102964.00
00:09:49.160 ========================================================
00:09:49.160 Total : 18323.47 71.58 13976.64 521.49 102964.00
00:09:49.160
00:09:49.160 05:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:49.160 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 836e71f8-be10-4873-b798-26b4615e276c
00:09:49.416 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a506e88c-af98-4cc3-8c3f-47fc0880ba2d
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67537 ']'
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67537
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 67537 ']'
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 67537
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:49.674 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67537
00:09:49.674 killing process with pid 67537
05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67537'
05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 67537
05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 67537
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:49.932 05:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:09:49.932
00:09:49.932 real 0m16.059s
00:09:49.932 user 1m7.198s
00:09:49.932 sys 0m3.879s
00:09:49.932 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:49.932 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:49.932 ************************************
00:09:49.932 END TEST nvmf_lvol
00:09:49.932 ************************************
00:09:49.932 05:15:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:49.932 05:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:49.932 05:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:49.932 05:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:49.932 ************************************
00:09:49.932 START TEST nvmf_lvs_grow
00:09:49.932 ************************************
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:50.191 * Looking for test storage...
00:09:50.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']'
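Note: the nvmf_veth_init trace that follows reduces to a small amount of iproute2 plumbing. A condensed sketch, with interface, namespace, and address names taken from the trace itself (the authoritative version is nvmf_veth_init in test/nvmf/common.sh; the error-tolerant cleanup of any previous run is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target ends move into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring every link up (nvmf_init_if, nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2,
  # plus the two namespaced interfaces and lo), then bridge the host-side ends:
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # one reachability check per address

The target then runs inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), so all NVMe/TCP traffic between the initiator at 10.0.0.1 and the listeners at 10.0.0.2/10.0.0.3 crosses this veth-and-bridge fabric rather than a physical NIC.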
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]]
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
Cannot find device "nvmf_tgt_br"
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
Cannot find device "nvmf_tgt_br2"
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
Cannot find device "nvmf_tgt_br"
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
Cannot find device "nvmf_tgt_br2"
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:09:50.191 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:09:50.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:50.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms
00:09:50.450
00:09:50.450 --- 10.0.0.2 ping statistics ---
00:09:50.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:50.450 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:09:50.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:50.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms
00:09:50.450
00:09:50.450 --- 10.0.0.3 ping statistics ---
00:09:50.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:50.450 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:50.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:50.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms
00:09:50.450
00:09:50.450 --- 10.0.0.1 ping statistics ---
00:09:50.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:50.450 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:50.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=68062
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 68062
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 68062 ']'
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:50.450 05:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:50.450 [2024-07-25 05:15:54.555508] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:09:50.450 [2024-07-25 05:15:54.555606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:50.709 [2024-07-25 05:15:54.688639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:50.709 [2024-07-25 05:15:54.749675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:50.709 [2024-07-25 05:15:54.749727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:50.709 [2024-07-25 05:15:54.749739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:50.709 [2024-07-25 05:15:54.749747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:50.709 [2024-07-25 05:15:54.749754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:50.709 [2024-07-25 05:15:54.749781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.643 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:51.643 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0
00:09:51.643 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:51.643 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:51.643 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:51.643 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:51.643 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:51.902 [2024-07-25 05:15:55.819308] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:51.902 ************************************
00:09:51.902 START TEST lvs_grow_clean
00:09:51.902 ************************************
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:51.902 05:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:52.161 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:52.161 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:52.419 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:09:52.419 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:09:52.419 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:52.677 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:52.677 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:09:52.677 05:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a lvol 150
00:09:52.936 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f6b15235-389b-47d9-bc05-7534d3142405
00:09:52.936 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:52.936 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:53.196 [2024-07-25 05:15:57.306830] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:53.196 [2024-07-25 05:15:57.306915] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:53.196 true
00:09:53.196 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:53.196 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:09:53.454 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:09:53.454 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:54.021 05:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f6b15235-389b-47d9-bc05-7534d3142405
00:09:54.278 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:54.537 [2024-07-25 05:15:58.487450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:54.537 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68229
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68229 /var/tmp/bdevperf.sock
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 68229 ']'
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:54.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:54.823 05:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:09:54.823 [2024-07-25 05:15:58.874695] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:09:54.823 [2024-07-25 05:15:58.874810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68229 ]
00:09:55.081 [2024-07-25 05:15:59.013794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:55.081 [2024-07-25 05:15:59.074394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:56.014 05:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:56.014 05:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0
00:09:56.014 05:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:56.272 Nvme0n1
00:09:56.272 05:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:56.532 [
00:09:56.532 {
00:09:56.532 "aliases": [
00:09:56.532 "f6b15235-389b-47d9-bc05-7534d3142405"
00:09:56.532 ],
00:09:56.532 "assigned_rate_limits": {
00:09:56.532 "r_mbytes_per_sec": 0,
00:09:56.532 "rw_ios_per_sec": 0,
00:09:56.532 "rw_mbytes_per_sec": 0,
00:09:56.532 "w_mbytes_per_sec": 0
00:09:56.532 },
00:09:56.532 "block_size": 4096,
00:09:56.532 "claimed": false,
00:09:56.532 "driver_specific": {
00:09:56.532 "mp_policy": "active_passive",
00:09:56.532 "nvme": [
00:09:56.532 {
00:09:56.532 "ctrlr_data": {
00:09:56.532 "ana_reporting": false,
00:09:56.532 "cntlid": 1,
00:09:56.532 "firmware_revision": "24.09",
00:09:56.532 "model_number": "SPDK bdev Controller",
00:09:56.532 "multi_ctrlr": true,
00:09:56.532 "oacs": {
00:09:56.532 "firmware": 0,
00:09:56.532 "format": 0,
00:09:56.532 "ns_manage": 0,
00:09:56.532 "security": 0
00:09:56.532 },
00:09:56.532 "serial_number": "SPDK0",
00:09:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:56.532 "vendor_id": "0x8086"
00:09:56.532 },
00:09:56.532 "ns_data": {
00:09:56.532 "can_share": true,
00:09:56.532 "id": 1
00:09:56.532 },
00:09:56.532 "trid": {
00:09:56.532 "adrfam": "IPv4",
00:09:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:56.532 "traddr": "10.0.0.2",
00:09:56.532 "trsvcid": "4420",
00:09:56.532 "trtype": "TCP"
00:09:56.532 },
00:09:56.532 "vs": {
00:09:56.532 "nvme_version": "1.3"
00:09:56.532 }
00:09:56.532 }
00:09:56.532 ]
00:09:56.532 },
00:09:56.532 "memory_domains": [
00:09:56.532 {
00:09:56.532 "dma_device_id": "system",
00:09:56.532 "dma_device_type": 1
00:09:56.532 }
00:09:56.532 ],
00:09:56.532 "name": "Nvme0n1",
00:09:56.532 "num_blocks": 38912,
00:09:56.532 "product_name": "NVMe disk",
00:09:56.532 "supported_io_types": {
00:09:56.532 "abort": true,
00:09:56.532 "compare": true,
00:09:56.532 "compare_and_write": true,
00:09:56.532 "copy": true,
00:09:56.532 "flush": true,
00:09:56.532 "get_zone_info": false,
00:09:56.532 "nvme_admin": true,
00:09:56.532 "nvme_io": true,
00:09:56.532 "nvme_io_md": false,
00:09:56.532 "nvme_iov_md": false,
00:09:56.532 "read": true,
00:09:56.532 "reset": true,
00:09:56.532 "seek_data": false,
00:09:56.532 "seek_hole": false,
00:09:56.532 "unmap": true,
00:09:56.532 "write": true,
00:09:56.532 "write_zeroes": true,
00:09:56.532 "zcopy": false,
00:09:56.532 "zone_append": false,
00:09:56.532 "zone_management": false
00:09:56.532 },
00:09:56.532 "uuid": "f6b15235-389b-47d9-bc05-7534d3142405",
00:09:56.532 "zoned": false
00:09:56.532 }
00:09:56.532 ]
00:09:56.532 05:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68276
00:09:56.532 05:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:56.532 05:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:09:56.532 Running I/O for 10 seconds...
00:09:57.467 Latency(us)
00:09:57.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:57.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:57.467 Nvme0n1 : 1.00 7661.00 29.93 0.00 0.00 0.00 0.00 0.00
00:09:57.467 ===================================================================================================================
00:09:57.467 Total : 7661.00 29.93 0.00 0.00 0.00 0.00 0.00
00:09:57.467
00:09:58.402 05:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:09:58.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:58.660 Nvme0n1 : 2.00 7672.50 29.97 0.00 0.00 0.00 0.00 0.00
00:09:58.660 ===================================================================================================================
00:09:58.660 Total : 7672.50 29.97 0.00 0.00 0.00 0.00 0.00
00:09:58.660
00:09:58.660 true
00:09:58.660 05:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:09:58.660 05:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:59.226 05:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:59.226 05:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:09:59.226 05:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68276
00:09:59.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:59.484 Nvme0n1 : 3.00 7743.33 30.25 0.00 0.00 0.00 0.00 0.00
00:09:59.484 ===================================================================================================================
00:09:59.484 Total : 7743.33 30.25 0.00 0.00 0.00 0.00 0.00
00:09:59.484
00:10:00.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:00.418 Nvme0n1 : 4.00 7730.50 30.20 0.00 0.00 0.00 0.00 0.00
00:10:00.418 ===================================================================================================================
00:10:00.418 Total : 7730.50 30.20 0.00 0.00 0.00 0.00 0.00
00:10:00.418
00:10:01.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:01.793 Nvme0n1 : 5.00 7725.20 30.18 0.00 0.00 0.00 0.00 0.00
00:10:01.793 ===================================================================================================================
00:10:01.793 Total : 7725.20 30.18 0.00 0.00 0.00 0.00 0.00
00:10:01.793
00:10:02.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:02.727 Nvme0n1 : 6.00 7710.67 30.12 0.00 0.00 0.00 0.00 0.00
00:10:02.727 ===================================================================================================================
00:10:02.727 Total : 7710.67 30.12 0.00 0.00 0.00 0.00 0.00
00:10:02.727
00:10:03.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:03.661 Nvme0n1 : 7.00 7690.00 30.04 0.00 0.00 0.00 0.00 0.00
00:10:03.661 ===================================================================================================================
00:10:03.661 Total : 7690.00 30.04 0.00 0.00 0.00 0.00 0.00
00:10:03.661
00:10:04.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:04.598 Nvme0n1 : 8.00 7664.75 29.94 0.00 0.00 0.00 0.00 0.00
00:10:04.598 ===================================================================================================================
00:10:04.598 Total : 7664.75 29.94 0.00 0.00 0.00 0.00 0.00
00:10:04.598
00:10:05.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:05.562 Nvme0n1 : 9.00 7639.22 29.84 0.00 0.00 0.00 0.00 0.00
00:10:05.562 ===================================================================================================================
00:10:05.562 Total : 7639.22 29.84 0.00 0.00 0.00 0.00 0.00
00:10:05.562
00:10:06.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:06.497 Nvme0n1 : 10.00 7632.10 29.81 0.00 0.00 0.00 0.00 0.00
00:10:06.497 ===================================================================================================================
00:10:06.497 Total : 7632.10 29.81 0.00 0.00 0.00 0.00 0.00
00:10:06.497
00:10:06.497
00:10:06.497 Latency(us)
00:10:06.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:06.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:06.497 Nvme0n1 : 10.01 7635.74 29.83 0.00 0.00 16751.79 8043.05 44326.17
00:10:06.497 ===================================================================================================================
00:10:06.497 Total : 7635.74 29.83 0.00 0.00 16751.79 8043.05 44326.17
00:10:06.497 0
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68229
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 68229 ']'
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 68229
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68229
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
killing process with pid 68229
Received shutdown signal, test time was about 10.000000 seconds
00:10:06.497
00:10:06.497
00:10:06.497 Latency(us)
00:10:06.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:06.497 ===================================================================================================================
00:10:06.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68229'
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 68229
00:10:06.497 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 68229
00:10:06.756 05:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:07.014 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:07.272 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:10:07.272 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:10:07.558 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:10:07.558 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:10:07.558 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:10:07.817 [2024-07-25 05:16:11.909627] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:10:07.817 05:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:10:08.076 2024/07/25 05:16:12 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:10:08.076 request:
00:10:08.076 {
00:10:08.076 "method": "bdev_lvol_get_lvstores",
00:10:08.076 "params": {
00:10:08.076 "uuid": "5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a"
00:10:08.076 }
00:10:08.076 }
00:10:08.076 Got JSON-RPC error response
00:10:08.076 GoRPCClient: error on JSON-RPC call
00:10:08.076 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:10:08.076 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:08.076 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:08.076 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:08.076 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:10:08.335 aio_bdev
00:10:08.335 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f6b15235-389b-47d9-bc05-7534d3142405
00:10:08.335 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=f6b15235-389b-47d9-bc05-7534d3142405
00:10:08.335 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:08.335 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i
00:10:08.335 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:08.335 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:08.335 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:10:08.593 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6b15235-389b-47d9-bc05-7534d3142405 -t 2000
00:10:08.853 [
00:10:08.853 {
00:10:08.853 "aliases": [
00:10:08.853 "lvs/lvol"
00:10:08.853 ],
00:10:08.853 "assigned_rate_limits": {
00:10:08.853 "r_mbytes_per_sec": 0,
00:10:08.853 "rw_ios_per_sec": 0,
00:10:08.853 "rw_mbytes_per_sec": 0,
00:10:08.853 "w_mbytes_per_sec": 0
00:10:08.853 },
00:10:08.853 "block_size": 4096,
00:10:08.853 "claimed": false,
00:10:08.853 "driver_specific": {
00:10:08.853 "lvol": {
00:10:08.853 "base_bdev": "aio_bdev",
00:10:08.853 "clone": false,
00:10:08.853 "esnap_clone": false,
00:10:08.853 "lvol_store_uuid": "5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a",
00:10:08.853 "num_allocated_clusters": 38,
00:10:08.853 "snapshot": false,
00:10:08.853 "thin_provision": false
00:10:08.853 }
00:10:08.853 },
00:10:08.853 "name": "f6b15235-389b-47d9-bc05-7534d3142405",
00:10:08.853 "num_blocks": 38912,
00:10:08.853 "product_name": "Logical Volume",
00:10:08.853 "supported_io_types": {
00:10:08.853 "abort": false,
00:10:08.853 "compare": false,
00:10:08.853 "compare_and_write": false,
00:10:08.853 "copy": false,
00:10:08.853 "flush": false,
00:10:08.853 "get_zone_info": false,
00:10:08.853 "nvme_admin": false,
00:10:08.853 "nvme_io": false,
00:10:08.853 "nvme_io_md": false,
00:10:08.853 "nvme_iov_md": false,
00:10:08.853 "read": true,
00:10:08.853 "reset": true,
00:10:08.853 "seek_data": true,
00:10:08.853 "seek_hole": true,
00:10:08.853 "unmap": true,
00:10:08.853 "write": true,
00:10:08.853 "write_zeroes": true,
00:10:08.853 "zcopy": false,
00:10:08.853 "zone_append": false,
00:10:08.853 "zone_management": false
00:10:08.853 },
00:10:08.853 "uuid": "f6b15235-389b-47d9-bc05-7534d3142405",
00:10:08.853 "zoned": false
00:10:08.853 }
00:10:08.853 ]
00:10:08.853 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0
00:10:08.853 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:10:08.853 05:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:10:09.112 05:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:10:09.112 05:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:10:09.112 05:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:10:09.379 05:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:10:09.379 05:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f6b15235-389b-47d9-bc05-7534d3142405
00:10:09.635 05:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ab3365e-2cd2-45a2-b94f-a4b8a3be4f1a
00:10:09.894 05:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:10:10.152 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:10:10.719 ************************************
00:10:10.719 END TEST lvs_grow_clean
00:10:10.719 ************************************
00:10:10.719
00:10:10.719 real 0m18.798s
00:10:10.719 user 0m18.351s
00:10:10.719 sys 0m2.093s
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:10:10.719 ************************************
00:10:10.719 START TEST lvs_grow_dirty
00:10:10.719 ************************************
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:10:10.719 05:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:10:10.990 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:10:10.990 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:10:11.249 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5b71c9e2-8817-4c92-96ad-1758f8b8b5e1
00:10:11.249 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1
00:10:11.249 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:10:11.507 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:10:11.507 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:10:11.507 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 lvol 150
00:10:11.765 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a
00:10:11.765 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:10:11.765 05:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:10:12.023 [2024-07-25 05:16:16.170854] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:10:12.023 [2024-07-25 05:16:16.170936] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:10:12.023 true
00:10:12.023 05:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1
00:10:12.023 05:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:10:12.280 05:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:10:12.280 05:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:10:12.846 05:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a
00:10:12.846 05:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:10:13.103 [2024-07-25 05:16:17.247441] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:13.103 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68673
00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68673 /var/tmp/bdevperf.sock
00:10:13.671 05:16:17
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68673 ']' 00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.671 05:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:13.671 [2024-07-25 05:16:17.634784] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:10:13.671 [2024-07-25 05:16:17.634900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68673 ] 00:10:13.671 [2024-07-25 05:16:17.777177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.939 [2024-07-25 05:16:17.882603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.505 05:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.505 05:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:14.505 05:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:15.071 Nvme0n1 00:10:15.071 05:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:15.329 [ 00:10:15.329 { 00:10:15.329 "aliases": [ 00:10:15.329 "257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a" 00:10:15.329 ], 00:10:15.329 "assigned_rate_limits": { 00:10:15.329 "r_mbytes_per_sec": 0, 00:10:15.329 "rw_ios_per_sec": 0, 00:10:15.329 "rw_mbytes_per_sec": 0, 00:10:15.329 "w_mbytes_per_sec": 0 00:10:15.329 }, 00:10:15.329 "block_size": 4096, 00:10:15.329 "claimed": false, 00:10:15.329 "driver_specific": { 00:10:15.329 "mp_policy": "active_passive", 00:10:15.329 "nvme": [ 00:10:15.329 { 00:10:15.329 "ctrlr_data": { 00:10:15.329 "ana_reporting": false, 00:10:15.329 "cntlid": 1, 00:10:15.329 "firmware_revision": "24.09", 00:10:15.329 "model_number": "SPDK bdev Controller", 00:10:15.329 "multi_ctrlr": true, 00:10:15.329 "oacs": { 00:10:15.329 "firmware": 0, 00:10:15.329 "format": 0, 00:10:15.329 "ns_manage": 0, 00:10:15.329 "security": 0 00:10:15.329 }, 00:10:15.329 "serial_number": "SPDK0", 00:10:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.329 "vendor_id": "0x8086" 00:10:15.329 }, 00:10:15.329 "ns_data": { 00:10:15.329 "can_share": true, 00:10:15.329 "id": 1 00:10:15.329 }, 00:10:15.329 "trid": { 00:10:15.329 "adrfam": "IPv4", 
00:10:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.329 "traddr": "10.0.0.2", 00:10:15.329 "trsvcid": "4420", 00:10:15.329 "trtype": "TCP" 00:10:15.329 }, 00:10:15.329 "vs": { 00:10:15.329 "nvme_version": "1.3" 00:10:15.329 } 00:10:15.329 } 00:10:15.329 ] 00:10:15.329 }, 00:10:15.329 "memory_domains": [ 00:10:15.329 { 00:10:15.329 "dma_device_id": "system", 00:10:15.329 "dma_device_type": 1 00:10:15.329 } 00:10:15.329 ], 00:10:15.329 "name": "Nvme0n1", 00:10:15.329 "num_blocks": 38912, 00:10:15.329 "product_name": "NVMe disk", 00:10:15.329 "supported_io_types": { 00:10:15.329 "abort": true, 00:10:15.329 "compare": true, 00:10:15.329 "compare_and_write": true, 00:10:15.329 "copy": true, 00:10:15.329 "flush": true, 00:10:15.329 "get_zone_info": false, 00:10:15.329 "nvme_admin": true, 00:10:15.329 "nvme_io": true, 00:10:15.329 "nvme_io_md": false, 00:10:15.329 "nvme_iov_md": false, 00:10:15.329 "read": true, 00:10:15.329 "reset": true, 00:10:15.329 "seek_data": false, 00:10:15.329 "seek_hole": false, 00:10:15.329 "unmap": true, 00:10:15.329 "write": true, 00:10:15.329 "write_zeroes": true, 00:10:15.329 "zcopy": false, 00:10:15.329 "zone_append": false, 00:10:15.329 "zone_management": false 00:10:15.329 }, 00:10:15.329 "uuid": "257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a", 00:10:15.329 "zoned": false 00:10:15.329 } 00:10:15.329 ] 00:10:15.329 05:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68726 00:10:15.329 05:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:15.329 05:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:15.329 Running I/O for 10 seconds... 
00:10:16.703 Latency(us) 00:10:16.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.703 Nvme0n1 : 1.00 7558.00 29.52 0.00 0.00 0.00 0.00 0.00 00:10:16.703 =================================================================================================================== 00:10:16.704 Total : 7558.00 29.52 0.00 0.00 0.00 0.00 0.00 00:10:16.704 00:10:17.270 05:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:17.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.528 Nvme0n1 : 2.00 7630.00 29.80 0.00 0.00 0.00 0.00 0.00 00:10:17.528 =================================================================================================================== 00:10:17.528 Total : 7630.00 29.80 0.00 0.00 0.00 0.00 0.00 00:10:17.528 00:10:17.528 true 00:10:17.786 05:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:17.786 05:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:18.043 05:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:18.043 05:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:18.043 05:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68726 00:10:18.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.610 Nvme0n1 : 3.00 7724.33 30.17 0.00 0.00 0.00 0.00 0.00 00:10:18.610 =================================================================================================================== 00:10:18.610 Total : 7724.33 30.17 0.00 0.00 0.00 0.00 0.00 00:10:18.610 00:10:19.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.557 Nvme0n1 : 4.00 7737.75 30.23 0.00 0.00 0.00 0.00 0.00 00:10:19.557 =================================================================================================================== 00:10:19.557 Total : 7737.75 30.23 0.00 0.00 0.00 0.00 0.00 00:10:19.557 00:10:20.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.512 Nvme0n1 : 5.00 7697.00 30.07 0.00 0.00 0.00 0.00 0.00 00:10:20.512 =================================================================================================================== 00:10:20.512 Total : 7697.00 30.07 0.00 0.00 0.00 0.00 0.00 00:10:20.512 00:10:21.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.446 Nvme0n1 : 6.00 7595.00 29.67 0.00 0.00 0.00 0.00 0.00 00:10:21.446 =================================================================================================================== 00:10:21.446 Total : 7595.00 29.67 0.00 0.00 0.00 0.00 0.00 00:10:21.446 00:10:22.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.380 Nvme0n1 : 7.00 7499.14 29.29 0.00 0.00 0.00 0.00 0.00 00:10:22.380 =================================================================================================================== 00:10:22.380 
Total : 7499.14 29.29 0.00 0.00 0.00 0.00 0.00 00:10:22.380 00:10:23.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.313 Nvme0n1 : 8.00 7460.75 29.14 0.00 0.00 0.00 0.00 0.00 00:10:23.313 =================================================================================================================== 00:10:23.313 Total : 7460.75 29.14 0.00 0.00 0.00 0.00 0.00 00:10:23.313 00:10:24.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.687 Nvme0n1 : 9.00 7471.67 29.19 0.00 0.00 0.00 0.00 0.00 00:10:24.687 =================================================================================================================== 00:10:24.687 Total : 7471.67 29.19 0.00 0.00 0.00 0.00 0.00 00:10:24.687 00:10:25.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.628 Nvme0n1 : 10.00 7376.90 28.82 0.00 0.00 0.00 0.00 0.00 00:10:25.628 =================================================================================================================== 00:10:25.628 Total : 7376.90 28.82 0.00 0.00 0.00 0.00 0.00 00:10:25.628 00:10:25.628 00:10:25.628 Latency(us) 00:10:25.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.628 Nvme0n1 : 10.02 7384.86 28.85 0.00 0.00 17321.68 7626.01 143940.89 00:10:25.628 =================================================================================================================== 00:10:25.628 Total : 7384.86 28.85 0.00 0.00 17321.68 7626.01 143940.89 00:10:25.628 0 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68673 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 68673 ']' 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 68673 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68673 00:10:25.628 killing process with pid 68673 00:10:25.628 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.628 00:10:25.628 Latency(us) 00:10:25.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.628 =================================================================================================================== 00:10:25.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68673' 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 68673 00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 68673 
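The dirty-mode pass above grows the logical volume store while I/O is in flight: bdevperf drives 10 seconds of 4KiB random writes (queue depth 128) against Nvme0n1 while the AIO backing file is enlarged from 200M to 400M, the AIO bdev is rescanned (51200 -> 102400 blocks), and bdev_lvol_grow_lvstore claims the new space (total_data_clusters moves from 49 to 99). A condensed sketch of that grow sequence, reconstructed from the RPC calls logged above with the same paths and store UUID used in this run:

  # enlarge the backing file while bdevperf keeps writing
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  # have the AIO bdev re-read its size (old block count 51200, new 102400)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
  # extend the lvstore onto the new capacity, then confirm the cluster count
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 | jq -r '.[0].total_data_clusters'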
00:10:25.628 05:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:25.887 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 68062 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 68062 00:10:26.454 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 68062 Killed "${NVMF_APP[@]}" "$@" 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68890 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68890 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68890 ']' 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.454 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:26.712 [2024-07-25 05:16:30.645119] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:10:26.712 [2024-07-25 05:16:30.645204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.712 [2024-07-25 05:16:30.784299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.712 [2024-07-25 05:16:30.852989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.712 [2024-07-25 05:16:30.853042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.712 [2024-07-25 05:16:30.853056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.712 [2024-07-25 05:16:30.853067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.712 [2024-07-25 05:16:30.853076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.712 [2024-07-25 05:16:30.853123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.970 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.970 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:26.970 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:26.970 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:26.970 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:26.970 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.970 05:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:27.228 [2024-07-25 05:16:31.235711] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:27.228 [2024-07-25 05:16:31.235930] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:27.228 [2024-07-25 05:16:31.236070] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:27.228 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:27.228 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a 00:10:27.228 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a 00:10:27.228 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.228 05:16:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:27.228 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.228 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.228 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:27.486 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a -t 2000 00:10:27.744 [ 00:10:27.744 { 00:10:27.744 "aliases": [ 00:10:27.744 "lvs/lvol" 00:10:27.744 ], 00:10:27.744 "assigned_rate_limits": { 00:10:27.744 "r_mbytes_per_sec": 0, 00:10:27.744 "rw_ios_per_sec": 0, 00:10:27.744 "rw_mbytes_per_sec": 0, 00:10:27.744 "w_mbytes_per_sec": 0 00:10:27.744 }, 00:10:27.744 "block_size": 4096, 00:10:27.744 "claimed": false, 00:10:27.744 "driver_specific": { 00:10:27.744 "lvol": { 00:10:27.744 "base_bdev": "aio_bdev", 00:10:27.744 "clone": false, 00:10:27.745 "esnap_clone": false, 00:10:27.745 "lvol_store_uuid": "5b71c9e2-8817-4c92-96ad-1758f8b8b5e1", 00:10:27.745 "num_allocated_clusters": 38, 00:10:27.745 "snapshot": false, 00:10:27.745 "thin_provision": false 00:10:27.745 } 00:10:27.745 }, 00:10:27.745 "name": "257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a", 00:10:27.745 "num_blocks": 38912, 00:10:27.745 "product_name": "Logical Volume", 00:10:27.745 "supported_io_types": { 00:10:27.745 "abort": false, 00:10:27.745 "compare": false, 00:10:27.745 "compare_and_write": false, 00:10:27.745 "copy": false, 00:10:27.745 "flush": false, 00:10:27.745 "get_zone_info": false, 00:10:27.745 "nvme_admin": false, 00:10:27.745 "nvme_io": false, 00:10:27.745 "nvme_io_md": false, 00:10:27.745 "nvme_iov_md": false, 00:10:27.745 "read": true, 00:10:27.745 "reset": true, 00:10:27.745 "seek_data": true, 00:10:27.745 "seek_hole": true, 00:10:27.745 "unmap": true, 00:10:27.745 "write": true, 00:10:27.745 "write_zeroes": true, 00:10:27.745 "zcopy": false, 00:10:27.745 "zone_append": false, 00:10:27.745 "zone_management": false 00:10:27.745 }, 00:10:27.745 "uuid": "257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a", 00:10:27.745 "zoned": false 00:10:27.745 } 00:10:27.745 ] 00:10:27.745 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:27.745 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:27.745 05:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:28.003 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:28.003 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:28.003 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:28.261 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( 
data_clusters == 99 )) 00:10:28.261 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:28.519 [2024-07-25 05:16:32.549371] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:28.519 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:28.778 2024/07/25 05:16:32 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5b71c9e2-8817-4c92-96ad-1758f8b8b5e1], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:28.778 request: 00:10:28.778 { 00:10:28.778 "method": "bdev_lvol_get_lvstores", 00:10:28.778 "params": { 00:10:28.778 "uuid": "5b71c9e2-8817-4c92-96ad-1758f8b8b5e1" 00:10:28.778 } 00:10:28.778 } 00:10:28.778 Got JSON-RPC error response 00:10:28.778 GoRPCClient: error on JSON-RPC call 00:10:28.778 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:28.778 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:28.778 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:28.778 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:28.778 05:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:29.036 aio_bdev 00:10:29.036 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a 00:10:29.036 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a 00:10:29.036 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.036 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:29.036 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.036 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.036 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:29.601 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a -t 2000 00:10:29.601 [ 00:10:29.601 { 00:10:29.601 "aliases": [ 00:10:29.601 "lvs/lvol" 00:10:29.601 ], 00:10:29.601 "assigned_rate_limits": { 00:10:29.601 "r_mbytes_per_sec": 0, 00:10:29.601 "rw_ios_per_sec": 0, 00:10:29.601 "rw_mbytes_per_sec": 0, 00:10:29.601 "w_mbytes_per_sec": 0 00:10:29.601 }, 00:10:29.601 "block_size": 4096, 00:10:29.601 "claimed": false, 00:10:29.601 "driver_specific": { 00:10:29.601 "lvol": { 00:10:29.601 "base_bdev": "aio_bdev", 00:10:29.601 "clone": false, 00:10:29.601 "esnap_clone": false, 00:10:29.601 "lvol_store_uuid": "5b71c9e2-8817-4c92-96ad-1758f8b8b5e1", 00:10:29.601 "num_allocated_clusters": 38, 00:10:29.601 "snapshot": false, 00:10:29.601 "thin_provision": false 00:10:29.601 } 00:10:29.601 }, 00:10:29.601 "name": "257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a", 00:10:29.601 "num_blocks": 38912, 00:10:29.601 "product_name": "Logical Volume", 00:10:29.601 "supported_io_types": { 00:10:29.602 "abort": false, 00:10:29.602 "compare": false, 00:10:29.602 "compare_and_write": false, 00:10:29.602 "copy": false, 00:10:29.602 "flush": false, 00:10:29.602 "get_zone_info": false, 00:10:29.602 "nvme_admin": false, 00:10:29.602 "nvme_io": false, 00:10:29.602 "nvme_io_md": false, 00:10:29.602 "nvme_iov_md": false, 00:10:29.602 "read": true, 00:10:29.602 "reset": true, 00:10:29.602 "seek_data": true, 00:10:29.602 "seek_hole": true, 00:10:29.602 "unmap": true, 00:10:29.602 "write": true, 00:10:29.602 "write_zeroes": true, 00:10:29.602 "zcopy": false, 00:10:29.602 "zone_append": false, 00:10:29.602 "zone_management": false 00:10:29.602 }, 00:10:29.602 "uuid": "257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a", 00:10:29.602 "zoned": false 00:10:29.602 } 00:10:29.602 ] 00:10:29.602 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:29.602 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:29.602 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:29.858 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:29.858 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:29.858 05:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:30.116 05:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:30.116 05:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 257a7b4e-eaf6-4e3e-a73f-2781a2b0c69a 00:10:30.373 05:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b71c9e2-8817-4c92-96ad-1758f8b8b5e1 00:10:30.631 05:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:30.922 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:31.488 ************************************ 00:10:31.488 END TEST lvs_grow_dirty 00:10:31.488 ************************************ 00:10:31.488 00:10:31.488 real 0m20.735s 00:10:31.488 user 0m44.994s 00:10:31.488 sys 0m7.808s 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:31.488 nvmf_trace.0 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.488 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # sync 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.746 rmmod nvme_tcp 00:10:31.746 rmmod nvme_fabrics 00:10:31.746 rmmod nvme_keyring 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68890 ']' 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68890 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 68890 ']' 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 68890 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68890 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68890' 00:10:31.746 killing process with pid 68890 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 68890 00:10:31.746 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 68890 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.007 05:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:32.007 00:10:32.007 real 0m41.969s 00:10:32.007 user 1m9.330s 00:10:32.007 sys 0m10.559s 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.007 ************************************ 00:10:32.007 END TEST nvmf_lvs_grow 00:10:32.007 ************************************ 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.007 ************************************ 00:10:32.007 START TEST nvmf_bdev_io_wait 00:10:32.007 ************************************ 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:32.007 * Looking for test storage... 00:10:32.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # 
source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.007 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.008 05:16:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.008 05:16:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:32.008 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:32.267 Cannot find device "nvmf_tgt_br" 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.267 Cannot find device "nvmf_tgt_br2" 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:32.267 Cannot find device "nvmf_tgt_br" 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:32.267 Cannot find device "nvmf_tgt_br2" 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:32.267 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:32.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:10:32.526 00:10:32.526 --- 10.0.0.2 ping statistics --- 00:10:32.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.526 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:32.526 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:32.526 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:32.526 00:10:32.526 --- 10.0.0.3 ping statistics --- 00:10:32.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.526 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:32.526 00:10:32.526 --- 10.0.0.1 ping statistics --- 00:10:32.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.526 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=69294 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 69294 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 69294 ']' 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
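The block above is nvmf_veth_init from nvmf/common.sh: the "Cannot find device" errors near the top are expected, since the function first probes for leftovers of a previous run and each failing cleanup command is tolerated via the "# true" that follows it. The resulting topology is an initiator veth (10.0.0.1) in the root namespace, two target veths (10.0.0.2, 10.0.0.3) inside nvmf_tgt_ns_spdk, and their host-side peers joined by the nvmf_br bridge. A condensed replay of that bring-up, using only commands visible in the trace (second target interface omitted for brevity; run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge joins both host-side ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator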
00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.526 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.527 [2024-07-25 05:16:36.578428] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:10:32.527 [2024-07-25 05:16:36.578570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.786 [2024-07-25 05:16:36.720227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.786 [2024-07-25 05:16:36.791262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.786 [2024-07-25 05:16:36.791318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.786 [2024-07-25 05:16:36.791332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.786 [2024-07-25 05:16:36.791342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.786 [2024-07-25 05:16:36.791351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.786 [2024-07-25 05:16:36.791535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.786 [2024-07-25 05:16:36.791670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.786 [2024-07-25 05:16:36.792151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.786 [2024-07-25 05:16:36.792159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
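nvmfappstart launched the target with --wait-for-rpc, which parks the app in a pre-init state so that bdev-layer options can still be changed; the very small I/O pool requested here (-p 5 -c 1) is presumably what later starves bdev_io allocations and exercises the bdev_io_wait retry path this test is named for. rpc_cmd in the trace is the harness wrapper around SPDK's stock rpc.py; a rough standalone sketch of the same sequence, assuming $SPDK points at the built tree and with a readiness loop standing in for waitforlisten:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                                   # poll /var/tmp/spdk.sock until it answers
    done
    "$SPDK/scripts/rpc.py" bdev_set_options -p 5 -c 1   # shrink the bdev_io pool/cache before init
    "$SPDK/scripts/rpc.py" framework_start_init         # leave the pre-init state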
00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.786 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.786 [2024-07-25 05:16:36.958970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 Malloc0 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.045 05:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 [2024-07-25 05:16:37.017347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.045 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69328 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:33.046 05:16:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:33.046 { 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme$subsystem", 00:10:33.046 "trtype": "$TEST_TRANSPORT", 00:10:33.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "$NVMF_PORT", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.046 "hdgst": ${hdgst:-false}, 00:10:33.046 "ddgst": ${ddgst:-false} 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 } 00:10:33.046 EOF 00:10:33.046 )") 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69330 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69333 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:33.046 { 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme$subsystem", 00:10:33.046 "trtype": "$TEST_TRANSPORT", 00:10:33.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "$NVMF_PORT", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.046 "hdgst": ${hdgst:-false}, 00:10:33.046 "ddgst": ${ddgst:-false} 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 } 00:10:33.046 EOF 00:10:33.046 )") 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69335 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:10:33.046 { 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme$subsystem", 00:10:33.046 "trtype": "$TEST_TRANSPORT", 00:10:33.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "$NVMF_PORT", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.046 "hdgst": ${hdgst:-false}, 00:10:33.046 "ddgst": ${ddgst:-false} 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 } 00:10:33.046 EOF 00:10:33.046 )") 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme1", 00:10:33.046 "trtype": "tcp", 00:10:33.046 "traddr": "10.0.0.2", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "4420", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.046 "hdgst": false, 00:10:33.046 "ddgst": false 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 }' 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme1", 00:10:33.046 "trtype": "tcp", 00:10:33.046 "traddr": "10.0.0.2", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "4420", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.046 "hdgst": false, 00:10:33.046 "ddgst": false 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 }' 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme1", 00:10:33.046 "trtype": "tcp", 00:10:33.046 "traddr": "10.0.0.2", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "4420", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.046 "hdgst": false, 00:10:33.046 "ddgst": false 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 }' 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:33.046 { 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme$subsystem", 00:10:33.046 "trtype": "$TEST_TRANSPORT", 00:10:33.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "$NVMF_PORT", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.046 "hdgst": ${hdgst:-false}, 00:10:33.046 "ddgst": ${ddgst:-false} 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 } 00:10:33.046 EOF 00:10:33.046 )") 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:33.046 "params": { 00:10:33.046 "name": "Nvme1", 00:10:33.046 "trtype": "tcp", 00:10:33.046 "traddr": "10.0.0.2", 00:10:33.046 "adrfam": "ipv4", 00:10:33.046 "trsvcid": "4420", 00:10:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.046 "hdgst": false, 00:10:33.046 "ddgst": false 00:10:33.046 }, 00:10:33.046 "method": "bdev_nvme_attach_controller" 00:10:33.046 }' 00:10:33.046 05:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69328 00:10:33.046 [2024-07-25 05:16:37.107029] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
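Each bdevperf instance receives its target description as a JSON config on an anonymous pipe: gen_nvmf_target_json expands the heredoc above into the bdev_nvme_attach_controller fragment shown by printf, and bash process substitution hands it over as --json /dev/fd/63. A minimal standalone version of one launch, assuming the standard SPDK JSON-config envelope around the printed fragment (the other three runs differ only in core mask, -i instance ID, and -w workload):

    SPDK=/home/vagrant/spdk_repo/spdk
    gen_conf() {
        printf '%s\n' '{"subsystems": [{"subsystem": "bdev", "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                     "adrfam": "ipv4", "trsvcid": "4420",
                     "subnqn": "nqn.2016-06.io.spdk:cnode1",
                     "hostnqn": "nqn.2016-06.io.spdk:host1",
                     "hdgst": false, "ddgst": false}}]}]}'
    }
    "$SPDK/build/examples/bdevperf" -m 0x10 -i 1 --json <(gen_conf) -q 128 -o 4096 -w write -t 1 -s 256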
00:10:33.046 [2024-07-25 05:16:37.107151] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:33.046 [2024-07-25 05:16:37.108160] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:10:33.046 [2024-07-25 05:16:37.108282] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:33.046 [2024-07-25 05:16:37.141169] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:10:33.046 [2024-07-25 05:16:37.141264] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:33.046 [2024-07-25 05:16:37.149830] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:10:33.047 [2024-07-25 05:16:37.149936] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:33.305 [2024-07-25 05:16:37.308083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.305 [2024-07-25 05:16:37.347423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.305 [2024-07-25 05:16:37.394528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:33.305 [2024-07-25 05:16:37.403094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:33.305 [2024-07-25 05:16:37.415649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.305 [2024-07-25 05:16:37.433140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.570 [2024-07-25 05:16:37.481520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:33.570 [2024-07-25 05:16:37.486916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:33.570 Running I/O for 1 seconds... 00:10:33.570 Running I/O for 1 seconds... 00:10:33.570 Running I/O for 1 seconds... 00:10:33.570 Running I/O for 1 seconds... 
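Core placement is deliberate: the target owns mask 0xF (reactors on cores 0-3, per the earlier notices), while the four bdevperf jobs take 0x10, 0x20, 0x40 and 0x80, matching the "Reactor started on core 4/5/6/7" lines above, so no initiator reactor contends with a target reactor. Each mask is just a one-hot bit for its core:

    # core-mask arithmetic behind the -m arguments above
    for core in 4 5 6 7; do
        printf 'core %d -> mask 0x%x\n' "$core" $((1 << core))
    done
    # prints 0x10, 0x20, 0x40, 0x80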
00:10:34.515
00:10:34.515 Latency(us)
00:10:34.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:34.515 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:34.515 Nvme1n1 : 1.01 10233.78 39.98 0.00 0.00 12456.85 6911.07 19422.49
00:10:34.516 ===================================================================================================================
00:10:34.516 Total : 10233.78 39.98 0.00 0.00 12456.85 6911.07 19422.49
00:10:34.516
00:10:34.516 Latency(us)
00:10:34.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:34.516 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:34.516 Nvme1n1 : 1.01 7615.73 29.75 0.00 0.00 16706.88 10724.07 26452.71
00:10:34.516 ===================================================================================================================
00:10:34.516 Total : 7615.73 29.75 0.00 0.00 16706.88 10724.07 26452.71
00:10:34.516
00:10:34.516 Latency(us)
00:10:34.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:34.516 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:34.516 Nvme1n1 : 1.00 176599.40 689.84 0.00 0.00 722.00 422.63 2129.92
00:10:34.516 ===================================================================================================================
00:10:34.516 Total : 176599.40 689.84 0.00 0.00 722.00 422.63 2129.92
00:10:34.516
00:10:34.516 Latency(us)
00:10:34.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:34.516 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:34.516 Nvme1n1 : 1.01 6355.51 24.83 0.00 0.00 20035.31 7626.01 35270.28
00:10:34.516 ===================================================================================================================
00:10:34.516 Total : 6355.51 24.83 0.00 0.00 20035.31 7626.01 35270.28
00:10:34.774 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69330
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69333
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 69335
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
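The write/read/unmap workloads pay a full NVMe/TCP round trip per I/O, while flush against the RAM-backed malloc bdev carries no data payload and has nothing to persist, which is presumably why it reports two orders of magnitude more IOPS. With all four tables printed, the script reaps the jobs by PID and runs nvmftestfini; a condensed sketch of that teardown (variable names as captured at launch, the last two commands standing in for the _remove_spdk_ns and addr-flush steps that follow below):

    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # 69328/69330/69333/69335 above
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null             # killprocess equivalent
    modprobe -v -r nvme-tcp                                    # unload initiator-side modules
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk                           # drop the target namespace
    ip -4 addr flush nvmf_init_if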
00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.775 rmmod nvme_tcp 00:10:34.775 rmmod nvme_fabrics 00:10:34.775 rmmod nvme_keyring 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 69294 ']' 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 69294 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 69294 ']' 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 69294 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.775 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69294 00:10:35.033 killing process with pid 69294 00:10:35.033 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.033 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.033 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69294' 00:10:35.033 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 69294 00:10:35.033 05:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 69294 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:35.033 ************************************ 00:10:35.033 END TEST nvmf_bdev_io_wait 00:10:35.033 ************************************ 00:10:35.033 00:10:35.033 real 0m3.075s 00:10:35.033 user 0m13.595s 00:10:35.033 sys 0m1.974s 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 ************************************ 00:10:35.033 START TEST nvmf_queue_depth 00:10:35.033 ************************************ 00:10:35.033 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:35.291 * Looking for test storage... 00:10:35.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.291 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.291 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:35.292 Cannot find device "nvmf_tgt_br" 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.292 Cannot find device "nvmf_tgt_br2" 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:35.292 Cannot find device "nvmf_tgt_br" 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:35.292 Cannot find device "nvmf_tgt_br2" 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.292 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:35.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:10:35.550 00:10:35.550 --- 10.0.0.2 ping statistics --- 00:10:35.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.550 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:35.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:35.550 00:10:35.550 --- 10.0.0.3 ping statistics --- 00:10:35.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.550 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:35.550 00:10:35.550 --- 10.0.0.1 ping statistics --- 00:10:35.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.550 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:35.550 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=69548 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 69548 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69548 ']' 00:10:35.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.551 05:16:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:35.551 [2024-07-25 05:16:39.724283] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:10:35.551 [2024-07-25 05:16:39.724374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.808 [2024-07-25 05:16:39.861047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.808 [2024-07-25 05:16:39.921032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.808 [2024-07-25 05:16:39.921089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.808 [2024-07-25 05:16:39.921102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.808 [2024-07-25 05:16:39.921111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.808 [2024-07-25 05:16:39.921118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.808 [2024-07-25 05:16:39.921146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.743 [2024-07-25 05:16:40.730974] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.743 Malloc0 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.743 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.744 [2024-07-25 05:16:40.788131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69598 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69598 /var/tmp/bdevperf.sock 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69598 ']' 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.744 05:16:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.744 [2024-07-25 05:16:40.839369] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
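queue_depth.sh drives bdevperf differently from the previous test: instead of a JSON config, the app starts idle with -z on its own RPC socket, the NVMe controller is attached through that socket, and bdevperf.py kicks off the actual run, exactly as the trace below shows. Condensed, with a readiness loop standing in for waitforlisten:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                                 # wait for /var/tmp/bdevperf.sock
    done
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests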
00:10:36.744 [2024-07-25 05:16:40.839456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69598 ] 00:10:37.002 [2024-07-25 05:16:40.989707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.002 [2024-07-25 05:16:41.076494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.002 05:16:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.002 05:16:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:37.002 05:16:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:37.002 05:16:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.002 05:16:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.259 NVMe0n1 00:10:37.259 05:16:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.259 05:16:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:37.259 Running I/O for 10 seconds... 00:10:49.459 00:10:49.459 Latency(us) 00:10:49.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.459 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:49.459 Verification LBA range: start 0x0 length 0x4000 00:10:49.459 NVMe0n1 : 10.07 8467.40 33.08 0.00 0.00 120363.21 16205.27 92941.96 00:10:49.459 =================================================================================================================== 00:10:49.459 Total : 8467.40 33.08 0.00 0.00 120363.21 16205.27 92941.96 00:10:49.459 0 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 69598 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69598 ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69598 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69598 00:10:49.459 killing process with pid 69598 00:10:49.459 Received shutdown signal, test time was about 10.000000 seconds 00:10:49.459 00:10:49.459 Latency(us) 00:10:49.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.459 =================================================================================================================== 00:10:49.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69598' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69598 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69598 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:49.459 rmmod nvme_tcp 00:10:49.459 rmmod nvme_fabrics 00:10:49.459 rmmod nvme_keyring 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 69548 ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 69548 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69548 ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69548 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69548 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69548' 00:10:49.459 killing process with pid 69548 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69548 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69548 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:49.459 00:10:49.459 real 0m12.769s 00:10:49.459 user 0m21.972s 00:10:49.459 sys 0m1.838s 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:49.459 ************************************ 00:10:49.459 END TEST nvmf_queue_depth 00:10:49.459 ************************************ 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.459 05:16:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.459 ************************************ 00:10:49.459 START TEST nvmf_target_multipath 00:10:49.459 ************************************ 00:10:49.459 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:49.459 * Looking for test storage... 
00:10:49.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.459 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.459 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:49.459 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.460 05:16:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:49.460 Cannot find device "nvmf_tgt_br" 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.460 Cannot find device "nvmf_tgt_br2" 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:49.460 Cannot find device "nvmf_tgt_br" 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:49.460 Cannot find device "nvmf_tgt_br2" 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:49.460 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:49.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:10:49.461 00:10:49.461 --- 10.0.0.2 ping statistics --- 00:10:49.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.461 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:49.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:49.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:10:49.461 00:10:49.461 --- 10.0.0.3 ping statistics --- 00:10:49.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.461 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:10:49.461 00:10:49.461 --- 10.0.0.1 ping statistics --- 00:10:49.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.461 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69918 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69918 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 69918 ']' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.461 05:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:49.461 [2024-07-25 05:16:52.515108] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:10:49.461 [2024-07-25 05:16:52.515204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.461 [2024-07-25 05:16:52.653001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.461 [2024-07-25 05:16:52.723353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.461 [2024-07-25 05:16:52.723412] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.461 [2024-07-25 05:16:52.723426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.461 [2024-07-25 05:16:52.723436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.461 [2024-07-25 05:16:52.723445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.461 [2024-07-25 05:16:52.723547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.461 [2024-07-25 05:16:52.723615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.461 [2024-07-25 05:16:52.724063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.461 [2024-07-25 05:16:52.724077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.461 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.461 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:10:49.461 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.461 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.461 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:49.461 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.461 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:49.719 [2024-07-25 05:16:53.809457] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.719 05:16:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:49.977 Malloc0 00:10:50.236 05:16:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:50.494 05:16:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.752 05:16:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.752 [2024-07-25 05:16:54.911632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.010 05:16:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:51.268 [2024-07-25 05:16:55.215898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:51.268 05:16:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:51.549 05:16:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:51.549 05:16:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.549 05:16:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.549 05:16:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.549 05:16:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:51.549 05:16:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:53.494 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:53.494 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.494 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:53.494 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=70063 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:53.752 05:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:53.752 [global] 00:10:53.752 thread=1 00:10:53.752 invalidate=1 00:10:53.752 rw=randrw 00:10:53.752 time_based=1 00:10:53.752 runtime=6 00:10:53.752 ioengine=libaio 00:10:53.752 direct=1 00:10:53.752 bs=4096 00:10:53.752 iodepth=128 00:10:53.752 norandommap=0 00:10:53.752 numjobs=1 00:10:53.752 00:10:53.752 verify_dump=1 00:10:53.752 verify_backlog=512 00:10:53.752 verify_state_save=0 00:10:53.752 do_verify=1 00:10:53.752 verify=crc32c-intel 00:10:53.752 [job0] 00:10:53.752 filename=/dev/nvme0n1 00:10:53.752 Could not set queue depth (nvme0n1) 00:10:53.752 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.752 fio-3.35 00:10:53.752 Starting 1 thread 00:10:54.703 05:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:54.960 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:55.527 05:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:56.462 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:56.462 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:56.462 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:56.462 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:56.721 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:56.979 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:56.979 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:56.979 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:56.979 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:56.979 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:56.979 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:56.979 05:17:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:56.979 05:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:56.979 05:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:56.979 05:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:56.979 05:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:56.979 05:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:56.979 05:17:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:57.913 05:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:57.913 05:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:57.913 05:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:57.913 05:17:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 70063 00:10:59.855 00:10:59.855 job0: (groupid=0, jobs=1): err= 0: pid=70084: Thu Jul 25 05:17:03 2024 00:10:59.855 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(248MiB/6011msec) 00:10:59.855 slat (usec): min=3, max=6496, avg=53.97, stdev=240.64 00:10:59.855 clat (usec): min=679, max=20564, avg=8231.40, stdev=1454.20 00:10:59.855 lat (usec): min=687, max=20578, avg=8285.37, stdev=1465.82 00:10:59.855 clat percentiles (usec): 00:10:59.855 | 1.00th=[ 4817], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7308], 00:10:59.855 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8356], 00:10:59.855 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[10814], 00:10:59.855 | 99.00th=[13042], 99.50th=[14091], 99.90th=[16450], 99.95th=[17695], 00:10:59.855 | 99.99th=[19530] 00:10:59.855 bw ( KiB/s): min=11704, max=27512, per=52.49%, avg=22207.08, stdev=5902.14, samples=12 00:10:59.855 iops : min= 2926, max= 6878, avg=5551.75, stdev=1475.54, samples=12 00:10:59.855 write: IOPS=6293, BW=24.6MiB/s (25.8MB/s)(131MiB/5312msec); 0 zone resets 00:10:59.855 slat (usec): min=5, max=7311, avg=65.59, stdev=162.82 00:10:59.855 clat (usec): min=670, max=19451, avg=7145.79, stdev=1356.57 00:10:59.855 lat (usec): min=709, max=19484, avg=7211.39, stdev=1362.98 00:10:59.855 clat percentiles (usec): 00:10:59.855 | 1.00th=[ 3884], 5.00th=[ 5080], 10.00th=[ 5932], 20.00th=[ 6390], 00:10:59.855 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:10:59.855 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8356], 95.00th=[ 9241], 00:10:59.855 | 99.00th=[12256], 99.50th=[13698], 99.90th=[16057], 99.95th=[16450], 00:10:59.855 | 99.99th=[17695] 00:10:59.855 bw ( KiB/s): min=12288, max=27376, per=88.34%, avg=22237.58, stdev=5558.90, samples=12 00:10:59.855 iops : min= 3072, max= 6844, avg=5559.33, stdev=1389.73, samples=12 00:10:59.855 lat (usec) : 750=0.01%, 1000=0.01% 00:10:59.855 lat (msec) : 2=0.07%, 4=0.57%, 10=92.59%, 20=6.75%, 50=0.01% 00:10:59.855 cpu : usr=5.92%, sys=24.26%, ctx=6410, majf=0, minf=121 00:10:59.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:59.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.855 issued rwts: total=63580,33430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.855 00:10:59.855 Run status group 0 (all jobs): 00:10:59.855 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=248MiB (260MB), run=6011-6011msec 00:10:59.855 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=131MiB (137MB), run=5312-5312msec 00:10:59.855 00:10:59.855 Disk stats (read/write): 00:10:59.855 nvme0n1: ios=62808/32908, merge=0/0, ticks=482137/218419, in_queue=700556, util=98.53% 00:10:59.855 05:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:00.420 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:00.678 05:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:01.611 05:17:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:01.611 05:17:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:01.611 05:17:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:01.611 05:17:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:01.611 05:17:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70218 00:11:01.611 05:17:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:01.611 05:17:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:01.611 [global] 00:11:01.611 thread=1 00:11:01.611 invalidate=1 00:11:01.611 rw=randrw 00:11:01.611 time_based=1 00:11:01.611 runtime=6 00:11:01.611 ioengine=libaio 00:11:01.611 direct=1 00:11:01.611 bs=4096 00:11:01.611 iodepth=128 00:11:01.611 norandommap=0 00:11:01.611 numjobs=1 00:11:01.612 00:11:01.612 verify_dump=1 00:11:01.612 verify_backlog=512 00:11:01.612 verify_state_save=0 00:11:01.612 do_verify=1 00:11:01.612 verify=crc32c-intel 00:11:01.612 [job0] 00:11:01.612 filename=/dev/nvme0n1 00:11:01.612 Could not set queue depth (nvme0n1) 00:11:01.869 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.869 fio-3.35 00:11:01.869 Starting 1 thread 00:11:02.803 05:17:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:03.061 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:03.320 05:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:04.253 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:04.253 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:04.253 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:04.253 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:04.512 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:04.770 05:17:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:06.145 05:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:06.145 05:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:06.145 05:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:06.145 05:17:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70218 00:11:08.045 00:11:08.045 job0: (groupid=0, jobs=1): err= 0: pid=70239: Thu Jul 25 05:17:12 2024 00:11:08.045 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(280MiB/6006msec) 00:11:08.045 slat (usec): min=3, max=5665, avg=41.78, stdev=204.54 00:11:08.045 clat (usec): min=320, max=18569, avg=7401.14, stdev=1968.13 00:11:08.045 lat (usec): min=346, max=18577, avg=7442.92, stdev=1982.94 00:11:08.045 clat percentiles (usec): 00:11:08.045 | 1.00th=[ 1614], 5.00th=[ 3458], 10.00th=[ 4752], 20.00th=[ 6128], 00:11:08.045 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:11:08.045 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10290], 00:11:08.045 | 99.00th=[11863], 99.50th=[12387], 99.90th=[15139], 99.95th=[16188], 00:11:08.045 | 99.99th=[18220] 00:11:08.045 bw ( KiB/s): min= 7016, max=38176, per=53.49%, avg=25496.91, stdev=9995.15, samples=11 00:11:08.045 iops : min= 1754, max= 9544, avg=6374.18, stdev=2498.74, samples=11 00:11:08.045 write: IOPS=7184, BW=28.1MiB/s (29.4MB/s)(148MiB/5288msec); 0 zone resets 00:11:08.045 slat (usec): min=12, max=3773, avg=52.92, stdev=136.19 00:11:08.045 clat (usec): min=169, max=15900, avg=6131.97, stdev=1908.08 00:11:08.045 lat (usec): min=272, max=15920, avg=6184.89, stdev=1921.70 00:11:08.045 clat percentiles (usec): 00:11:08.045 | 1.00th=[ 1237], 5.00th=[ 2540], 10.00th=[ 3359], 20.00th=[ 4359], 00:11:08.045 | 30.00th=[ 5407], 40.00th=[ 6259], 50.00th=[ 6652], 60.00th=[ 6915], 00:11:08.045 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 8586], 00:11:08.045 | 99.00th=[10159], 99.50th=[10945], 99.90th=[13829], 99.95th=[14222], 00:11:08.045 | 99.99th=[15533] 00:11:08.045 bw ( KiB/s): min= 7376, max=38696, per=88.76%, avg=25505.00, stdev=9794.49, samples=11 00:11:08.045 iops : min= 1844, max= 9674, avg=6376.18, stdev=2448.56, samples=11 00:11:08.045 lat (usec) : 250=0.01%, 500=0.03%, 750=0.11%, 1000=0.18% 00:11:08.045 lat (msec) : 2=2.01%, 4=7.43%, 10=85.87%, 20=4.36% 00:11:08.045 cpu : usr=5.88%, sys=24.96%, ctx=7753, majf=0, minf=121 00:11:08.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:08.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.045 issued rwts: total=71563,37989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.045 00:11:08.045 Run status group 0 (all jobs): 00:11:08.045 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=280MiB (293MB), run=6006-6006msec 00:11:08.045 WRITE: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=148MiB (156MB), run=5288-5288msec 00:11:08.045 00:11:08.045 Disk stats (read/write): 00:11:08.045 nvme0n1: ios=70608/37339, merge=0/0, ticks=488535/211565, in_queue=700100, util=98.65% 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:08.045 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.303 rmmod nvme_tcp 00:11:08.303 rmmod nvme_fabrics 00:11:08.303 rmmod nvme_keyring 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 69918 ']' 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69918 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 69918 ']' 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 69918 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69918 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.303 killing process with pid 69918 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69918' 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 69918 00:11:08.303 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 69918 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.561 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:08.562 00:11:08.562 real 0m20.669s 00:11:08.562 user 1m21.925s 00:11:08.562 sys 0m6.636s 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:08.562 ************************************ 00:11:08.562 END TEST nvmf_target_multipath 00:11:08.562 ************************************ 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.562 ************************************ 00:11:08.562 START TEST nvmf_zcopy 00:11:08.562 ************************************ 00:11:08.562 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:08.820 * Looking for test storage... 
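Before moving into the zcopy test, one detail from the multipath trace above is worth unpacking: check_ana_state (target/multipath.sh) is a poll-until-match loop over the kernel's per-path ANA sysfs attribute, retried once per second for at most 20 attempts. A minimal re-sketch, reconstructed from the xtrace output above (the actual function in multipath.sh may differ in detail):

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Wait until the sysfs file exists and reports the expected ANA state;
    # [[ a || b ]] short-circuits, so the file is only read once it exists.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1s
        (( timeout-- == 0 )) && return 1   # give up after ~20 retries
    done
}

The test flips listener states with nvmf_subsystem_listener_set_ana_state and then uses this loop to wait for the host's multipath view (nvme0c0n1 / nvme0c1n1) to catch up before letting fio continue.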
00:11:08.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:11:08.820 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 
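The nvmftestinit / nvmf_veth_init sequence that follows first tears down any leftover topology (the "Cannot find device ..." messages below are harmless on a fresh host) and then builds the virtual test network: one initiator-side veth pair that stays on the host, two target-side pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side ends together. Condensed from the trace below (same device names and addresses; the link-up steps and verification pings are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target port, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br   # bridge the host-side peer ends together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT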
00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:08.821 Cannot find device "nvmf_tgt_br" 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.821 Cannot find device "nvmf_tgt_br2" 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:08.821 Cannot find device "nvmf_tgt_br" 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 
00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:08.821 Cannot find device "nvmf_tgt_br2" 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.821 05:17:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:09.079 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:09.079 05:17:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:09.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:09.080 00:11:09.080 --- 10.0.0.2 ping statistics --- 00:11:09.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.080 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:09.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:09.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:11:09.080 00:11:09.080 --- 10.0.0.3 ping statistics --- 00:11:09.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.080 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:09.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:09.080 00:11:09.080 --- 10.0.0.1 ping statistics --- 00:11:09.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.080 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=70521 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 70521 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 70521 ']' 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.080 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.080 [2024-07-25 05:17:13.230073] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:11:09.080 [2024-07-25 05:17:13.230192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.338 [2024-07-25 05:17:13.374729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.338 [2024-07-25 05:17:13.432483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.338 [2024-07-25 05:17:13.432711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.338 [2024-07-25 05:17:13.433121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.338 [2024-07-25 05:17:13.433527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.338 [2024-07-25 05:17:13.433641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:09.338 [2024-07-25 05:17:13.433877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 [2024-07-25 05:17:13.562587] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 [2024-07-25 05:17:13.578703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 malloc0 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.596 { 00:11:09.596 "params": { 00:11:09.596 "name": "Nvme$subsystem", 00:11:09.596 "trtype": "$TEST_TRANSPORT", 00:11:09.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.596 "adrfam": "ipv4", 00:11:09.596 "trsvcid": "$NVMF_PORT", 00:11:09.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.596 "hdgst": ${hdgst:-false}, 00:11:09.596 "ddgst": ${ddgst:-false} 00:11:09.596 }, 00:11:09.596 "method": "bdev_nvme_attach_controller" 00:11:09.596 } 00:11:09.596 EOF 00:11:09.596 )") 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:09.596 05:17:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.596 "params": { 00:11:09.596 "name": "Nvme1", 00:11:09.596 "trtype": "tcp", 00:11:09.596 "traddr": "10.0.0.2", 00:11:09.596 "adrfam": "ipv4", 00:11:09.596 "trsvcid": "4420", 00:11:09.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.596 "hdgst": false, 00:11:09.596 "ddgst": false 00:11:09.596 }, 00:11:09.596 "method": "bdev_nvme_attach_controller" 00:11:09.596 }' 00:11:09.596 [2024-07-25 05:17:13.675514] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:11:09.596 [2024-07-25 05:17:13.675626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70553 ] 00:11:09.854 [2024-07-25 05:17:13.815402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.854 [2024-07-25 05:17:13.886795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.112 Running I/O for 10 seconds... 
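The gen_nvmf_target_json helper traced above builds bdevperf's --json config on the fly: each subsystem contributes one bdev_nvme_attach_controller fragment via a command-substituted heredoc, the fragments are comma-joined with IFS=',' and pretty-printed through jq (nvmf/common.sh@554-558 in the trace). A standalone sketch of the pattern with literal values matching the single-subsystem JSON printed above (the real helper expands $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and $NVMF_PORT instead, and with several subsystems embeds the joined list in a larger config):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller fragment per requested subsystem.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                       # join fragments with commas
    printf '%s\n' "${config[*]}" | jq .
}

The /dev/fd/62 argument seen in the bdevperf command line above is presumably the result of process substitution, i.e. something like: bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192.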
00:11:20.092 00:11:20.092 Latency(us) 00:11:20.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.092 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:20.092 Verification LBA range: start 0x0 length 0x1000 00:11:20.092 Nvme1n1 : 10.01 5890.77 46.02 0.00 0.00 21657.89 2457.60 29789.09 00:11:20.092 =================================================================================================================== 00:11:20.092 Total : 5890.77 46.02 0.00 0.00 21657.89 2457.60 29789.09 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70675 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:20.092 { 00:11:20.092 "params": { 00:11:20.092 "name": "Nvme$subsystem", 00:11:20.092 "trtype": "$TEST_TRANSPORT", 00:11:20.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:20.092 "adrfam": "ipv4", 00:11:20.092 "trsvcid": "$NVMF_PORT", 00:11:20.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:20.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:20.092 "hdgst": ${hdgst:-false}, 00:11:20.092 "ddgst": ${ddgst:-false} 00:11:20.092 }, 00:11:20.092 "method": "bdev_nvme_attach_controller" 00:11:20.092 } 00:11:20.092 EOF 00:11:20.092 )") 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:20.092 [2024-07-25 05:17:24.235126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.092 [2024-07-25 05:17:24.235173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
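From here to the end of the excerpt the trace interleaves the second bdevperf run with dozens of near-identical JSON-RPC failures. Each entry shows the same parameter map — method nvmf_subsystem_add_ns, bdev malloc0, nsid 1, target cnode1 — rejected with Code=-32602 because NSID 1 is already in use; a loop in zcopy.sh appears to be re-issuing the add while I/O is in flight, and the steady stream of clean rejections (rather than hangs or crashes) suggests RPC handling under load is what is being exercised, not a test failure. Each rejected attempt corresponds to the rpc_cmd already seen at zcopy.sh@30:

# Re-adding an attached namespace; the target answers JSON-RPC
# error -32602 "Invalid parameters", exactly as logged below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 malloc0 -n 1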
00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:20.092 05:17:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:20.092 "params": { 00:11:20.092 "name": "Nvme1", 00:11:20.092 "trtype": "tcp", 00:11:20.092 "traddr": "10.0.0.2", 00:11:20.092 "adrfam": "ipv4", 00:11:20.092 "trsvcid": "4420", 00:11:20.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:20.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:20.092 "hdgst": false, 00:11:20.092 "ddgst": false 00:11:20.092 }, 00:11:20.092 "method": "bdev_nvme_attach_controller" 00:11:20.092 }' 00:11:20.092 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.092 [2024-07-25 05:17:24.247102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.092 [2024-07-25 05:17:24.247133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.092 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.092 [2024-07-25 05:17:24.259102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.092 [2024-07-25 05:17:24.259131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.092 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.271104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.271134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.279097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.279125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.286244] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:11:20.351 [2024-07-25 05:17:24.286327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70675 ] 00:11:20.351 [2024-07-25 05:17:24.291125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.291159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.299126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.299164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.307132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.307170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.315120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.315155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.323163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.323209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.335164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.335206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:20.351 [2024-07-25 05:17:24.347141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.347174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.359131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.359161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.371180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.371231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.383149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.383185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.395186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.395235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.407163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.407201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.419197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.419246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.430188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.351 [2024-07-25 05:17:24.431180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.431212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.351 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.351 [2024-07-25 05:17:24.443176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.351 [2024-07-25 05:17:24.443215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.455166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.352 [2024-07-25 05:17:24.455197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.467155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.352 [2024-07-25 05:17:24.467182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.475154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.352 [2024-07-25 05:17:24.475179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.483184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.352 [2024-07-25 05:17:24.483224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.495187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.352 [2024-07-25 05:17:24.495224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.503178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.352 [2024-07-25 05:17:24.503215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.511163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.352 [2024-07-25 05:17:24.511192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.352 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.352 [2024-07-25 05:17:24.520925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.611 [2024-07-25 05:17:24.527261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.527327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.539237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.539285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.551250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.551300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.563239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:11:20.611 [2024-07-25 05:17:24.563284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.575234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.575280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.587205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.587239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.599239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.599277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.611218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.611252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.623227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.623263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.611 [2024-07-25 05:17:24.635227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.611 [2024-07-25 05:17:24.635261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
00:11:20.611 Running I/O for 5 seconds...
00:11:20.611 [2024-07-25 05:17:24.667238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:20.611 [2024-07-25 05:17:24.667269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:20.611 2024/07/25 05:17:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line sequence keeps repeating while the I/O workload runs, with only the timestamps advancing (console time 00:11:20.611 through 00:11:22.431, target time 05:17:24.681 through 05:17:26.462) ...]
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.397628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.397665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.413129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.413166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.428901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.428940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.446423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.446473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.462084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.462131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.479111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.479149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.494883] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.494924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.512273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.512315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.522929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.522976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.537748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.537794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.550827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.550887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.567772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.567832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.580938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.581008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.431 [2024-07-25 05:17:26.595366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.431 [2024-07-25 05:17:26.595421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.431 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.690 [2024-07-25 05:17:26.607759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.690 [2024-07-25 05:17:26.607806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.690 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.690 [2024-07-25 05:17:26.619383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.690 [2024-07-25 05:17:26.619427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.690 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.690 [2024-07-25 05:17:26.630574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.690 [2024-07-25 05:17:26.630618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.690 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.690 [2024-07-25 05:17:26.646231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.690 [2024-07-25 05:17:26.646274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.690 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.690 [2024-07-25 05:17:26.656491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.690 [2024-07-25 05:17:26.656531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.690 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.690 [2024-07-25 05:17:26.667642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:22.690 [2024-07-25 05:17:26.667683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.690 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.690 [2024-07-25 05:17:26.683791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.690 [2024-07-25 05:17:26.683843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.694131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.694170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.709833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.709876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.726822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.726889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.743477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.743533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.760633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.760680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.770940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.770993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.782092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.782133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.795186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.795231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.806395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.806451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.819451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.819492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.830015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.830056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.841197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:22.691 [2024-07-25 05:17:26.841260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.691 [2024-07-25 05:17:26.852559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.691 [2024-07-25 05:17:26.852621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.691 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.866581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.866635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.880814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.880877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.897169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.897228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.911936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.912012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.930282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.930342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.947138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.947198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.964454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.964517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.977610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.977668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:26.993575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:26.993616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:27.004303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:27.004344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:27.015281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:27.015321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:27.028199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:27.028239] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:27.038377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:27.038424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:27.049137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:27.049177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.949 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.949 [2024-07-25 05:17:27.060341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.949 [2024-07-25 05:17:27.060381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.950 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.950 [2024-07-25 05:17:27.073386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.950 [2024-07-25 05:17:27.073425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.950 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.950 [2024-07-25 05:17:27.089193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.950 [2024-07-25 05:17:27.089233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.950 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.950 [2024-07-25 05:17:27.104503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.950 [2024-07-25 05:17:27.104543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.950 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.950 [2024-07-25 05:17:27.123360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.950 [2024-07-25 05:17:27.123406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.137631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.137673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.153745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.153786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.170612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.170653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.186442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.186499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.197105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.197145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.208592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.208634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.219523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.219564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.230510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.230550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.241401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.241439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.255542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.255582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.208 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.208 [2024-07-25 05:17:27.266137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.208 [2024-07-25 05:17:27.266178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.209 [2024-07-25 05:17:27.276912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.276964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:23.209 [2024-07-25 05:17:27.287969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.288035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.209 [2024-07-25 05:17:27.303344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.303405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.209 [2024-07-25 05:17:27.319297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.319349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.209 [2024-07-25 05:17:27.335137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.335178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.209 [2024-07-25 05:17:27.352519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.352567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.209 [2024-07-25 05:17:27.365895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.365974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.209 [2024-07-25 05:17:27.380508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.209 [2024-07-25 05:17:27.380568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.209 2024/07/25 05:17:27 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.395045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.395099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.411749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.411838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.428654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.428721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.440599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.440642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.452433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.452472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.463098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.463139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.478184] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.478225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.488893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.488933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.499594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.499634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.510438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.510487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.521352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.521392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.538182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.538222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.554836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.554887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.467 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.467 [2024-07-25 05:17:27.572859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.467 [2024-07-25 05:17:27.572902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.468 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.468 [2024-07-25 05:17:27.587846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.468 [2024-07-25 05:17:27.587889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.468 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.468 [2024-07-25 05:17:27.598072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.468 [2024-07-25 05:17:27.598112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.468 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.468 [2024-07-25 05:17:27.609795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.468 [2024-07-25 05:17:27.609833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.468 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.468 [2024-07-25 05:17:27.624816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.468 [2024-07-25 05:17:27.624856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.468 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.468 [2024-07-25 05:17:27.642120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.468 [2024-07-25 05:17:27.642161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.726 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.727 [2024-07-25 05:17:27.658182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:23.727 [2024-07-25 05:17:27.658225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.727 2024/07/25 05:17:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... this three-line failure sequence (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" -> nvmf_rpc.c:1553:nvmf_rpc_ns_paused: "Unable to add namespace" -> JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats with identical parameters well over a hundred times between 05:17:27.668 and 05:17:29.461; only the timestamps differ, so the individual occurrences are elided here. The final occurrence appears after the note below. ...]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.280 [2024-07-25 05:17:29.397083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.280 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.280 [2024-07-25 05:17:29.408039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.281 [2024-07-25 05:17:29.408078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.281 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.281 [2024-07-25 05:17:29.419048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.281 [2024-07-25 05:17:29.419088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.281 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.281 [2024-07-25 05:17:29.430078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.281 [2024-07-25 05:17:29.430115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.281 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.281 [2024-07-25 05:17:29.443542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.281 [2024-07-25 05:17:29.443601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.281 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.461793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.461848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.474926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.474988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.489032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.489092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.505340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.505389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.523425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.523475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.535608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.535680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.550839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.550902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.567642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.567684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.585253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.585294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.539 [2024-07-25 05:17:29.595949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.539 [2024-07-25 05:17:29.595999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.539 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.606703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.606741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.617498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.617536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.628364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.628401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.641064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.641117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.658316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.658372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.668948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.669000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.675766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.675802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 00:11:25.540 Latency(us) 00:11:25.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.540 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:25.540 Nvme1n1 : 5.01 11106.69 86.77 0.00 0.00 11509.26 5004.57 22401.40 00:11:25.540 =================================================================================================================== 00:11:25.540 Total : 11106.69 86.77 0.00 0.00 11509.26 5004.57 22401.40 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.683460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.683493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.691467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.691501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.699489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 05:17:29.699526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.540 [2024-07-25 05:17:29.707489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.540 [2024-07-25 
05:17:29.707526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.540 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.719505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.719551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.727490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.727527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.739509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.739552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.747500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.747537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.759494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.759532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.767481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.767510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.775484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.775516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.798 [2024-07-25 05:17:29.783508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.798 [2024-07-25 05:17:29.783542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.798 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.799 [2024-07-25 05:17:29.791493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.799 [2024-07-25 05:17:29.791522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.799 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.799 [2024-07-25 05:17:29.799498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.799 [2024-07-25 05:17:29.799532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.799 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.799 [2024-07-25 05:17:29.807495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.799 [2024-07-25 05:17:29.807527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.799 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.799 [2024-07-25 05:17:29.819521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.799 [2024-07-25 05:17:29.819568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.799 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.799 [2024-07-25 05:17:29.831548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.799 [2024-07-25 05:17:29.831593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:11:25.799 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.799 [2024-07-25 05:17:29.839508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.799 [2024-07-25 05:17:29.839540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.799 2024/07/25 05:17:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:25.799 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70675) - No such process 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70675 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 delay0 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.799 05:17:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:26.057 [2024-07-25 05:17:30.056694] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:32.701 Initializing NVMe Controllers 00:11:32.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:32.701 Initialization complete. Launching workers. 
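The trace above shows the zcopy test swapping the plain malloc namespace for a delay bdev and then firing abortable I/O at it; the NS/CTRLR abort counters that follow summarize how that run fared. For replaying this step outside the CI harness, the rpc_cmd calls map directly onto SPDK's stock RPC client -- a minimal sketch, assuming the target from this run is still up on the default /var/tmp/spdk.sock and the repo lives where the log says it does:

    # Sketch: re-create the delay-bdev swap performed by zcopy.sh above
    # (all arguments copied verbatim from the rpc_cmd trace).
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Queue random I/O at the now-slow namespace and abort it in flight:
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'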
00:11:32.701 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 94
00:11:32.701 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 381, failed to submit 33
00:11:32.701 success 210, unsuccess 171, failed 0
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:32.701 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:32.702 rmmod nvme_tcp
00:11:32.702 rmmod nvme_fabrics
00:11:32.702 rmmod nvme_keyring
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 70521 ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 70521
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 70521 ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 70521
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70521
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:11:32.702 killing process with pid 70521
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70521'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 70521
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 70521
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:11:32.702 ************************************
00:11:32.702 END TEST nvmf_zcopy
00:11:32.702 ************************************
00:11:32.702
00:11:32.702 real 0m23.701s
00:11:32.702 user 0m39.018s
00:11:32.702 sys 0m6.228s
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:32.702 ************************************
00:11:32.702 START TEST nvmf_nmic
00:11:32.702 ************************************
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:32.702 * Looking for test storage...
00:11:32.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
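The nvme gen-hostnqn call above (from nvme-cli) mints the random host identity that every later nvme connect in this test reuses. One way to derive the paired --hostid, consistent with the two assignments in the trace but not necessarily the harness's literal code, is plain parameter expansion:

    # Sketch: host identity for later `nvme connect` calls.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # drop everything through the last ':' -> bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")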
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated /opt/golangci:/opt/protoc:/opt/go segments collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated /opt/golangci:/opt/protoc:/opt/go segments collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated /opt/golangci:/opt/protoc:/opt/go segments collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated /opt/golangci:/opt/protoc:/opt/go segments collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]]
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init
00:11:32.702 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:11:32.703 Cannot find device "nvmf_tgt_br"
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:11:32.703 Cannot find device "nvmf_tgt_br2"
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:11:32.703 Cannot find device "nvmf_tgt_br"
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:11:32.703 Cannot find device "nvmf_tgt_br2"
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:32.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:32.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:11:32.703 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:11:33.005 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:33.005 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:33.005 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:33.005 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
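The ip/iptables trace above is nvmf_veth_init building the test network: the SPDK target will run inside the nvmf_tgt_ns_spdk namespace, reachable from the host across veth pairs slaved to a single bridge, with TCP port 4420 opened in the firewall. Condensed into a standalone sketch (same names and addresses as the trace; the second target interface nvmf_tgt_if2/10.0.0.3 is plumbed identically and the delete-before-create cleanup is omitted):

    # Sketch: the veth/bridge topology the harness just built.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # the reachability checks that follow in the log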
00:11:33.005 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:11:33.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:33.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:11:33.005
00:11:33.005 --- 10.0.0.2 ping statistics ---
00:11:33.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:33.005 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:11:33.005 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:11:33.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:33.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms
00:11:33.005
00:11:33.005 --- 10.0.0.3 ping statistics ---
00:11:33.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:33.005 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:11:33.005 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:33.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:33.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:11:33.005
00:11:33.005 --- 10.0.0.1 ping statistics ---
00:11:33.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:33.005 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=70994
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 70994
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 70994 ']'
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:33.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:33.006 05:17:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.006 [2024-07-25 05:17:36.996310] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:11:33.006 [2024-07-25 05:17:36.996409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:33.265 [2024-07-25 05:17:37.132067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:33.265 [2024-07-25 05:17:37.191398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:33.265 [2024-07-25 05:17:37.191449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:33.265 [2024-07-25 05:17:37.191461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:33.265 [2024-07-25 05:17:37.191469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:33.265 [2024-07-25 05:17:37.191476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:33.265 [2024-07-25 05:17:37.191641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:33.265 [2024-07-25 05:17:37.191748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:33.265 [2024-07-25 05:17:37.191807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:33.265 [2024-07-25 05:17:37.191814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.265 [2024-07-25 05:17:37.335731] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.265 Malloc0
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
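At this point the target is up inside the namespace with a TCP transport, one 64 MB malloc bdev (512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above), and subsystem cnode1. Outside the harness the same state condenses to a short script -- a sketch reusing exactly the binaries and arguments the trace shows, run from the SPDK repo root:

    # Sketch: stand up the same target state by hand.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME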
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.265 [2024-07-25 05:17:37.393055] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.265 test case1: single bdev can't be used in multiple subsystems
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.265 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.266 [2024-07-25 05:17:37.416888] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:11:33.266 [2024-07-25 05:17:37.416931] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:11:33.266 [2024-07-25 05:17:37.416944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:33.266 2024/07/25 05:17:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:33.266 request:
00:11:33.266 {
00:11:33.266   "method": "nvmf_subsystem_add_ns",
00:11:33.266   "params": {
00:11:33.266     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:33.266     "namespace": {
00:11:33.266       "bdev_name": "Malloc0",
00:11:33.266       "no_auto_visible": false
00:11:33.266     }
00:11:33.266   }
00:11:33.266 }
00:11:33.266 Got JSON-RPC error response
00:11:33.266 GoRPCClient: error on JSON-RPC call
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:11:33.266  Adding namespace failed - expected result.
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:11:33.266 test case2: host connect to nvmf target in multiple paths
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:33.266 [2024-07-25 05:17:37.429125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.266 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:33.526 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:11:33.783 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:11:33.783 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:11:33.783 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:33.783 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:33.783 05:17:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:11:35.682 05:17:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:35.682 05:17:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:35.682 05:17:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:35.682 05:17:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.682 05:17:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:35.682 05:17:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:35.682 [global] 00:11:35.682 thread=1 00:11:35.682 invalidate=1 00:11:35.682 rw=write 00:11:35.682 time_based=1 00:11:35.682 runtime=1 00:11:35.682 ioengine=libaio 00:11:35.682 direct=1 00:11:35.682 bs=4096 00:11:35.682 iodepth=1 00:11:35.682 norandommap=0 00:11:35.682 numjobs=1 00:11:35.682 00:11:35.682 verify_dump=1 00:11:35.682 verify_backlog=512 00:11:35.682 verify_state_save=0 00:11:35.682 do_verify=1 00:11:35.682 verify=crc32c-intel 00:11:35.682 [job0] 00:11:35.682 filename=/dev/nvme0n1 00:11:35.682 Could not set queue depth (nvme0n1) 00:11:35.939 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.939 fio-3.35 00:11:35.939 Starting 1 thread 00:11:36.873 00:11:36.873 job0: (groupid=0, jobs=1): err= 0: pid=71090: Thu Jul 25 05:17:41 2024 00:11:36.873 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:36.873 slat (nsec): min=13760, max=51264, avg=17585.36, stdev=4126.90 00:11:36.873 clat (usec): min=134, max=262, avg=155.35, stdev=14.56 00:11:36.873 lat (usec): min=149, max=281, avg=172.94, stdev=15.39 00:11:36.873 clat percentiles (usec): 00:11:36.873 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:11:36.873 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:11:36.873 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:11:36.873 | 99.00th=[ 219], 99.50th=[ 231], 99.90th=[ 247], 99.95th=[ 251], 00:11:36.873 | 99.99th=[ 262] 00:11:36.873 write: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec); 0 zone resets 00:11:36.873 slat (usec): min=19, max=146, avg=25.83, stdev= 7.48 00:11:36.873 clat (usec): min=92, max=240, avg=109.93, stdev=10.14 00:11:36.873 lat (usec): min=114, max=335, avg=135.76, stdev=13.61 00:11:36.873 clat percentiles (usec): 00:11:36.873 | 1.00th=[ 97], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 103], 00:11:36.873 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:11:36.873 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 122], 95.00th=[ 127], 00:11:36.873 | 99.00th=[ 145], 99.50th=[ 157], 99.90th=[ 208], 99.95th=[ 225], 00:11:36.873 | 99.99th=[ 241] 00:11:36.873 bw ( KiB/s): min=12960, max=12960, per=96.67%, avg=12960.00, stdev= 0.00, samples=1 00:11:36.873 iops : min= 3240, max= 3240, avg=3240.00, stdev= 0.00, samples=1 00:11:36.873 lat (usec) : 100=4.85%, 250=95.10%, 500=0.05% 00:11:36.873 cpu : usr=2.70%, sys=10.50%, ctx=6428, majf=0, minf=2 00:11:36.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.873 issued rwts: total=3072,3355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.873 00:11:36.873 Run status group 0 (all jobs): 00:11:36.873 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:36.873 WRITE: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=13.1MiB (13.7MB), 
run=1001-1001msec 00:11:36.873 00:11:36.873 Disk stats (read/write): 00:11:36.873 nvme0n1: ios=2777/3072, merge=0/0, ticks=454/365, in_queue=819, util=91.28% 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:37.132 rmmod nvme_tcp 00:11:37.132 rmmod nvme_fabrics 00:11:37.132 rmmod nvme_keyring 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 70994 ']' 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 70994 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 70994 ']' 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 70994 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70994 00:11:37.132 killing process with pid 70994 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 70994' 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 70994 00:11:37.132 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 70994 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:37.390 00:11:37.390 real 0m4.988s 00:11:37.390 user 0m16.349s 00:11:37.390 sys 0m1.254s 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:37.390 ************************************ 00:11:37.390 END TEST nvmf_nmic 00:11:37.390 ************************************ 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:37.390 ************************************ 00:11:37.390 START TEST nvmf_fio_target 00:11:37.390 ************************************ 00:11:37.390 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:37.650 * Looking for test storage... 
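For reference, the core of the nvmf_nmic test that just finished reduces to the RPC sequence below. This is a condensed sketch reconstructed from the xtrace above, not the script itself: rpc_cmd in the trace is assumed to be the suite's thin wrapper around scripts/rpc.py (the later fio.sh steps invoke rpc.py directly), and the expected-failure handling is abbreviated to a single check.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, flags as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case1: attaching the same bdev to a second subsystem must fail,
    # because cnode1 already holds an exclusive_write claim on Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected success' && exit 1                     # expected: Code=-32602 Invalid parameters

test case2 then adds a second listener on port 4421, connects the host to cnode1 over both 4420 and 4421, and runs the fio write/verify job shown above against the resulting device before tearing everything down.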
00:11:37.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.650 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:37.651 
05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:37.651 Cannot find device "nvmf_tgt_br" 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:37.651 Cannot find device "nvmf_tgt_br2" 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:37.651 Cannot find device "nvmf_tgt_br" 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:37.651 Cannot find device "nvmf_tgt_br2" 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:37.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:37.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:37.651 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:37.910 
05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:37.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:37.910 00:11:37.910 --- 10.0.0.2 ping statistics --- 00:11:37.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.910 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:37.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:37.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:37.910 00:11:37.910 --- 10.0.0.3 ping statistics --- 00:11:37.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.910 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:37.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:37.910 00:11:37.910 --- 10.0.0.1 ping statistics --- 00:11:37.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.910 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71266 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71266 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 71266 ']' 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:37.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:37.910 05:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.910 [2024-07-25 05:17:42.041171] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
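Before this target started, nvmf_veth_init built the virtual network that the trace above walks through. Condensed into a sketch from those commands (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here, and the cleanup attempts that precede setup — the "Cannot find device" lines — are dropped):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end <-> bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end <-> bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two veth pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

nvmf_tgt then runs inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the host-side nvme connect to 10.0.0.2:4420 crosses the bridge like a real fabric hop — which is what the three pings above were verifying.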
00:11:37.910 [2024-07-25 05:17:42.041261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.169 [2024-07-25 05:17:42.180178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.169 [2024-07-25 05:17:42.240621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.169 [2024-07-25 05:17:42.240678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.169 [2024-07-25 05:17:42.240690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.169 [2024-07-25 05:17:42.240699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.169 [2024-07-25 05:17:42.240706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.169 [2024-07-25 05:17:42.240775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.169 [2024-07-25 05:17:42.241032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.169 [2024-07-25 05:17:42.241644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.169 [2024-07-25 05:17:42.241705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.169 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.169 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:38.169 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.169 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.169 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.427 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.427 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:38.685 [2024-07-25 05:17:42.629208] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.685 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.942 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:38.942 05:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.199 05:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:39.199 05:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.455 05:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:39.455 05:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.711 05:17:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:39.711 05:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:40.000 05:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.258 05:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:40.258 05:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.515 05:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:40.515 05:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.773 05:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:40.773 05:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:41.031 05:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.289 05:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:41.289 05:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:41.552 05:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:41.552 05:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:41.816 05:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.074 [2024-07-25 05:17:46.199471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.074 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:42.335 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:42.901 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.901 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:42.901 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:42.901 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:42.901 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:42.901 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:42.901 05:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:44.816 05:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:44.816 05:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:44.816 05:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.816 05:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:44.816 05:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.816 05:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:44.816 05:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:44.816 [global] 00:11:44.816 thread=1 00:11:44.816 invalidate=1 00:11:44.816 rw=write 00:11:44.816 time_based=1 00:11:44.816 runtime=1 00:11:44.816 ioengine=libaio 00:11:44.816 direct=1 00:11:44.816 bs=4096 00:11:44.816 iodepth=1 00:11:44.816 norandommap=0 00:11:44.816 numjobs=1 00:11:44.816 00:11:44.816 verify_dump=1 00:11:44.816 verify_backlog=512 00:11:44.816 verify_state_save=0 00:11:44.816 do_verify=1 00:11:44.816 verify=crc32c-intel 00:11:44.816 [job0] 00:11:44.816 filename=/dev/nvme0n1 00:11:44.816 [job1] 00:11:44.816 filename=/dev/nvme0n2 00:11:44.816 [job2] 00:11:44.816 filename=/dev/nvme0n3 00:11:44.816 [job3] 00:11:44.816 filename=/dev/nvme0n4 00:11:45.074 Could not set queue depth (nvme0n1) 00:11:45.074 Could not set queue depth (nvme0n2) 00:11:45.074 Could not set queue depth (nvme0n3) 00:11:45.074 Could not set queue depth (nvme0n4) 00:11:45.074 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.074 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.074 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.074 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.074 fio-3.35 00:11:45.074 Starting 4 threads 00:11:46.448 00:11:46.448 job0: (groupid=0, jobs=1): err= 0: pid=71548: Thu Jul 25 05:17:50 2024 00:11:46.448 read: IOPS=1268, BW=5075KiB/s (5197kB/s)(5080KiB/1001msec) 00:11:46.448 slat (nsec): min=11396, max=85306, avg=22956.09, stdev=8235.13 00:11:46.448 clat (usec): min=254, max=780, avg=423.23, stdev=121.13 00:11:46.448 lat (usec): min=278, max=805, avg=446.19, stdev=122.19 00:11:46.448 clat percentiles (usec): 00:11:46.448 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:11:46.448 | 30.00th=[ 306], 40.00th=[ 347], 50.00th=[ 441], 60.00th=[ 474], 00:11:46.448 | 70.00th=[ 510], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 611], 00:11:46.448 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 725], 99.95th=[ 783], 00:11:46.448 | 99.99th=[ 783] 00:11:46.448 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:46.448 slat 
(usec): min=16, max=107, avg=32.21, stdev=11.13 00:11:46.448 clat (usec): min=102, max=651, avg=244.85, stdev=55.58 00:11:46.448 lat (usec): min=127, max=677, avg=277.05, stdev=52.43 00:11:46.448 clat percentiles (usec): 00:11:46.448 | 1.00th=[ 139], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 202], 00:11:46.448 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 239], 60.00th=[ 253], 00:11:46.448 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 338], 00:11:46.448 | 99.00th=[ 420], 99.50th=[ 494], 99.90th=[ 611], 99.95th=[ 652], 00:11:46.448 | 99.99th=[ 652] 00:11:46.448 bw ( KiB/s): min= 8192, max= 8192, per=25.52%, avg=8192.00, stdev= 0.00, samples=1 00:11:46.448 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:46.448 lat (usec) : 250=31.82%, 500=52.92%, 750=15.22%, 1000=0.04% 00:11:46.448 cpu : usr=2.00%, sys=6.00%, ctx=2809, majf=0, minf=3 00:11:46.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.448 issued rwts: total=1270,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.448 job1: (groupid=0, jobs=1): err= 0: pid=71549: Thu Jul 25 05:17:50 2024 00:11:46.448 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:46.448 slat (nsec): min=9040, max=89268, avg=19163.18, stdev=6310.63 00:11:46.448 clat (usec): min=136, max=7229, avg=316.06, stdev=326.84 00:11:46.448 lat (usec): min=154, max=7245, avg=335.22, stdev=328.00 00:11:46.448 clat percentiles (usec): 00:11:46.448 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:11:46.448 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 215], 00:11:46.448 | 70.00th=[ 469], 80.00th=[ 519], 90.00th=[ 570], 95.00th=[ 594], 00:11:46.448 | 99.00th=[ 660], 99.50th=[ 832], 99.90th=[ 4948], 99.95th=[ 7242], 00:11:46.448 | 99.99th=[ 7242] 00:11:46.448 write: IOPS=2025, BW=8104KiB/s (8298kB/s)(8112KiB/1001msec); 0 zone resets 00:11:46.448 slat (usec): min=14, max=131, avg=29.09, stdev=10.01 00:11:46.448 clat (usec): min=101, max=441, avg=205.99, stdev=83.56 00:11:46.448 lat (usec): min=125, max=475, avg=235.08, stdev=83.41 00:11:46.448 clat percentiles (usec): 00:11:46.448 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 123], 00:11:46.448 | 30.00th=[ 129], 40.00th=[ 143], 50.00th=[ 186], 60.00th=[ 249], 00:11:46.448 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 355], 00:11:46.448 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 420], 99.95th=[ 424], 00:11:46.448 | 99.99th=[ 441] 00:11:46.448 bw ( KiB/s): min=10384, max=10384, per=32.35%, avg=10384.00, stdev= 0.00, samples=1 00:11:46.448 iops : min= 2596, max= 2596, avg=2596.00, stdev= 0.00, samples=1 00:11:46.448 lat (usec) : 250=60.83%, 500=29.35%, 750=9.57%, 1000=0.08% 00:11:46.448 lat (msec) : 4=0.08%, 10=0.08% 00:11:46.448 cpu : usr=2.10%, sys=6.80%, ctx=3566, majf=0, minf=11 00:11:46.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.448 issued rwts: total=1536,2028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.448 job2: (groupid=0, jobs=1): err= 0: pid=71550: Thu Jul 25 05:17:50 
2024 00:11:46.448 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:46.448 slat (nsec): min=13548, max=58570, avg=22118.15, stdev=4657.24 00:11:46.448 clat (usec): min=163, max=494, avg=296.77, stdev=42.20 00:11:46.448 lat (usec): min=180, max=516, avg=318.89, stdev=43.20 00:11:46.448 clat percentiles (usec): 00:11:46.448 | 1.00th=[ 184], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 265], 00:11:46.448 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 306], 00:11:46.448 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 375], 00:11:46.448 | 99.00th=[ 408], 99.50th=[ 437], 99.90th=[ 490], 99.95th=[ 494], 00:11:46.448 | 99.99th=[ 494] 00:11:46.448 write: IOPS=1906, BW=7624KiB/s (7807kB/s)(7632KiB/1001msec); 0 zone resets 00:11:46.448 slat (usec): min=21, max=109, avg=32.54, stdev= 6.08 00:11:46.448 clat (usec): min=121, max=2050, avg=230.38, stdev=59.99 00:11:46.448 lat (usec): min=153, max=2086, avg=262.93, stdev=60.42 00:11:46.448 clat percentiles (usec): 00:11:46.448 | 1.00th=[ 135], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 204], 00:11:46.448 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:11:46.448 | 70.00th=[ 237], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:11:46.448 | 99.00th=[ 355], 99.50th=[ 416], 99.90th=[ 709], 99.95th=[ 2057], 00:11:46.448 | 99.99th=[ 2057] 00:11:46.448 bw ( KiB/s): min= 8192, max= 8192, per=25.52%, avg=8192.00, stdev= 0.00, samples=1 00:11:46.448 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:46.448 lat (usec) : 250=44.19%, 500=55.69%, 750=0.09% 00:11:46.448 lat (msec) : 4=0.03% 00:11:46.448 cpu : usr=1.90%, sys=7.00%, ctx=3445, majf=0, minf=12 00:11:46.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.448 issued rwts: total=1536,1908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.448 job3: (groupid=0, jobs=1): err= 0: pid=71551: Thu Jul 25 05:17:50 2024 00:11:46.449 read: IOPS=2046, BW=8188KiB/s (8384kB/s)(8196KiB/1001msec) 00:11:46.449 slat (nsec): min=14206, max=65689, avg=18896.85, stdev=5588.08 00:11:46.449 clat (usec): min=156, max=655, avg=193.17, stdev=49.17 00:11:46.449 lat (usec): min=171, max=672, avg=212.06, stdev=52.66 00:11:46.449 clat percentiles (usec): 00:11:46.449 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:11:46.449 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:46.449 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 297], 00:11:46.449 | 99.00th=[ 449], 99.50th=[ 461], 99.90th=[ 482], 99.95th=[ 553], 00:11:46.449 | 99.99th=[ 660] 00:11:46.449 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:46.449 slat (usec): min=19, max=112, avg=30.63, stdev=13.29 00:11:46.449 clat (usec): min=104, max=1155, avg=186.08, stdev=105.70 00:11:46.449 lat (usec): min=137, max=1195, avg=216.71, stdev=116.39 00:11:46.449 clat percentiles (usec): 00:11:46.449 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 129], 00:11:46.449 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:11:46.449 | 70.00th=[ 149], 80.00th=[ 192], 90.00th=[ 396], 95.00th=[ 420], 00:11:46.449 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 578], 99.95th=[ 611], 00:11:46.449 | 99.99th=[ 1156] 00:11:46.449 bw ( KiB/s): min= 8192, max= 
8192, per=25.52%, avg=8192.00, stdev= 0.00, samples=1 00:11:46.449 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:46.449 lat (usec) : 250=86.92%, 500=12.97%, 750=0.09% 00:11:46.449 lat (msec) : 2=0.02% 00:11:46.449 cpu : usr=2.70%, sys=8.50%, ctx=4609, majf=0, minf=11 00:11:46.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.449 issued rwts: total=2049,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.449 00:11:46.449 Run status group 0 (all jobs): 00:11:46.449 READ: bw=24.9MiB/s (26.2MB/s), 5075KiB/s-8188KiB/s (5197kB/s-8384kB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:11:46.449 WRITE: bw=31.3MiB/s (32.9MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.4MiB (32.9MB), run=1001-1001msec 00:11:46.449 00:11:46.449 Disk stats (read/write): 00:11:46.449 nvme0n1: ios=1074/1512, merge=0/0, ticks=426/369, in_queue=795, util=86.87% 00:11:46.449 nvme0n2: ios=1542/1536, merge=0/0, ticks=483/295, in_queue=778, util=86.37% 00:11:46.449 nvme0n3: ios=1395/1536, merge=0/0, ticks=422/364, in_queue=786, util=89.05% 00:11:46.449 nvme0n4: ios=1739/2048, merge=0/0, ticks=350/430, in_queue=780, util=89.61% 00:11:46.449 05:17:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:46.449 [global] 00:11:46.449 thread=1 00:11:46.449 invalidate=1 00:11:46.449 rw=randwrite 00:11:46.449 time_based=1 00:11:46.449 runtime=1 00:11:46.449 ioengine=libaio 00:11:46.449 direct=1 00:11:46.449 bs=4096 00:11:46.449 iodepth=1 00:11:46.449 norandommap=0 00:11:46.449 numjobs=1 00:11:46.449 00:11:46.449 verify_dump=1 00:11:46.449 verify_backlog=512 00:11:46.449 verify_state_save=0 00:11:46.449 do_verify=1 00:11:46.449 verify=crc32c-intel 00:11:46.449 [job0] 00:11:46.449 filename=/dev/nvme0n1 00:11:46.449 [job1] 00:11:46.449 filename=/dev/nvme0n2 00:11:46.449 [job2] 00:11:46.449 filename=/dev/nvme0n3 00:11:46.449 [job3] 00:11:46.449 filename=/dev/nvme0n4 00:11:46.449 Could not set queue depth (nvme0n1) 00:11:46.449 Could not set queue depth (nvme0n2) 00:11:46.449 Could not set queue depth (nvme0n3) 00:11:46.449 Could not set queue depth (nvme0n4) 00:11:46.449 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.449 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.449 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.449 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.449 fio-3.35 00:11:46.449 Starting 4 threads 00:11:47.823 00:11:47.823 job0: (groupid=0, jobs=1): err= 0: pid=71610: Thu Jul 25 05:17:51 2024 00:11:47.823 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:47.823 slat (nsec): min=13066, max=47321, avg=15752.99, stdev=3608.88 00:11:47.823 clat (usec): min=146, max=510, avg=181.98, stdev=25.67 00:11:47.823 lat (usec): min=161, max=524, avg=197.73, stdev=26.33 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:11:47.823 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 
174], 60.00th=[ 178], 00:11:47.823 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 225], 95.00th=[ 239], 00:11:47.823 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 306], 99.95th=[ 310], 00:11:47.823 | 99.99th=[ 510] 00:11:47.823 write: IOPS=2980, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1001msec); 0 zone resets 00:11:47.823 slat (nsec): min=18258, max=99709, avg=23270.38, stdev=6316.83 00:11:47.823 clat (usec): min=106, max=1562, avg=138.78, stdev=35.56 00:11:47.823 lat (usec): min=126, max=1583, avg=162.05, stdev=38.43 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 112], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 123], 00:11:47.823 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:11:47.823 | 70.00th=[ 141], 80.00th=[ 151], 90.00th=[ 172], 95.00th=[ 182], 00:11:47.823 | 99.00th=[ 208], 99.50th=[ 223], 99.90th=[ 482], 99.95th=[ 490], 00:11:47.823 | 99.99th=[ 1565] 00:11:47.823 bw ( KiB/s): min=12263, max=12263, per=32.00%, avg=12263.00, stdev= 0.00, samples=1 00:11:47.823 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:11:47.823 lat (usec) : 250=98.56%, 500=1.41%, 750=0.02% 00:11:47.823 lat (msec) : 2=0.02% 00:11:47.823 cpu : usr=2.60%, sys=7.90%, ctx=5546, majf=0, minf=8 00:11:47.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 issued rwts: total=2560,2983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.823 job1: (groupid=0, jobs=1): err= 0: pid=71611: Thu Jul 25 05:17:51 2024 00:11:47.823 read: IOPS=2689, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:11:47.823 slat (nsec): min=13199, max=47312, avg=17410.95, stdev=4043.10 00:11:47.823 clat (usec): min=137, max=830, avg=168.26, stdev=22.55 00:11:47.823 lat (usec): min=151, max=844, avg=185.67, stdev=23.58 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:11:47.823 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:11:47.823 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202], 00:11:47.823 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 285], 99.95th=[ 474], 00:11:47.823 | 99.99th=[ 832] 00:11:47.823 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:47.823 slat (nsec): min=15955, max=76592, avg=27332.93, stdev=10868.64 00:11:47.823 clat (usec): min=97, max=1466, avg=131.56, stdev=33.74 00:11:47.823 lat (usec): min=118, max=1485, avg=158.89, stdev=38.87 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:11:47.823 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 131], 00:11:47.823 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:11:47.823 | 99.00th=[ 188], 99.50th=[ 260], 99.90th=[ 420], 99.95th=[ 570], 00:11:47.823 | 99.99th=[ 1467] 00:11:47.823 bw ( KiB/s): min=12288, max=12288, per=32.06%, avg=12288.00, stdev= 0.00, samples=1 00:11:47.823 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:47.823 lat (usec) : 100=0.09%, 250=99.50%, 500=0.36%, 750=0.02%, 1000=0.02% 00:11:47.823 lat (msec) : 2=0.02% 00:11:47.823 cpu : usr=2.80%, sys=9.60%, ctx=5764, majf=0, minf=7 00:11:47.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.823 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 issued rwts: total=2692,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.823 job2: (groupid=0, jobs=1): err= 0: pid=71612: Thu Jul 25 05:17:51 2024 00:11:47.823 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:47.823 slat (nsec): min=13812, max=61674, avg=25640.24, stdev=7302.19 00:11:47.823 clat (usec): min=161, max=780, avg=290.51, stdev=35.32 00:11:47.823 lat (usec): min=184, max=803, avg=316.15, stdev=36.59 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 208], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:11:47.823 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:11:47.823 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 347], 00:11:47.823 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 594], 99.95th=[ 783], 00:11:47.823 | 99.99th=[ 783] 00:11:47.823 write: IOPS=1998, BW=7992KiB/s (8184kB/s)(8000KiB/1001msec); 0 zone resets 00:11:47.823 slat (nsec): min=19989, max=98950, avg=37148.16, stdev=9404.92 00:11:47.823 clat (usec): min=108, max=536, avg=214.94, stdev=52.59 00:11:47.823 lat (usec): min=137, max=573, avg=252.09, stdev=54.00 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 153], 00:11:47.823 | 30.00th=[ 198], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 231], 00:11:47.823 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 289], 00:11:47.823 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 416], 99.95th=[ 537], 00:11:47.823 | 99.99th=[ 537] 00:11:47.823 bw ( KiB/s): min= 8192, max= 8192, per=21.37%, avg=8192.00, stdev= 0.00, samples=1 00:11:47.823 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:47.823 lat (usec) : 250=45.70%, 500=54.21%, 750=0.06%, 1000=0.03% 00:11:47.823 cpu : usr=2.20%, sys=8.50%, ctx=3558, majf=0, minf=13 00:11:47.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 issued rwts: total=1536,2000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.823 job3: (groupid=0, jobs=1): err= 0: pid=71613: Thu Jul 25 05:17:51 2024 00:11:47.823 read: IOPS=1467, BW=5870KiB/s (6011kB/s)(5876KiB/1001msec) 00:11:47.823 slat (nsec): min=14348, max=80405, avg=22726.84, stdev=8624.66 00:11:47.823 clat (usec): min=202, max=665, avg=342.34, stdev=64.07 00:11:47.823 lat (usec): min=218, max=728, avg=365.07, stdev=67.71 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 293], 00:11:47.823 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 326], 00:11:47.823 | 70.00th=[ 375], 80.00th=[ 412], 90.00th=[ 429], 95.00th=[ 445], 00:11:47.823 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 627], 99.95th=[ 668], 00:11:47.823 | 99.99th=[ 668] 00:11:47.823 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:47.823 slat (usec): min=20, max=126, avg=35.06, stdev=12.35 00:11:47.823 clat (usec): min=124, max=7687, avg=261.29, stdev=279.87 00:11:47.823 lat (usec): min=147, max=7712, avg=296.35, stdev=280.02 00:11:47.823 clat percentiles (usec): 00:11:47.823 | 1.00th=[ 141], 5.00th=[ 194], 
10.00th=[ 210], 20.00th=[ 221], 00:11:47.823 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:11:47.823 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 314], 00:11:47.823 | 99.00th=[ 412], 99.50th=[ 963], 99.90th=[ 5080], 99.95th=[ 7701], 00:11:47.823 | 99.99th=[ 7701] 00:11:47.823 bw ( KiB/s): min= 8192, max= 8192, per=21.37%, avg=8192.00, stdev= 0.00, samples=1 00:11:47.823 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:47.823 lat (usec) : 250=31.11%, 500=67.62%, 750=1.00%, 1000=0.03% 00:11:47.823 lat (msec) : 2=0.07%, 4=0.07%, 10=0.10% 00:11:47.823 cpu : usr=2.00%, sys=6.40%, ctx=3005, majf=0, minf=19 00:11:47.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.823 issued rwts: total=1469,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.823 00:11:47.823 Run status group 0 (all jobs): 00:11:47.824 READ: bw=32.2MiB/s (33.8MB/s), 5870KiB/s-10.5MiB/s (6011kB/s-11.0MB/s), io=32.3MiB (33.8MB), run=1001-1001msec 00:11:47.824 WRITE: bw=37.4MiB/s (39.2MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=37.5MiB (39.3MB), run=1001-1001msec 00:11:47.824 00:11:47.824 Disk stats (read/write): 00:11:47.824 nvme0n1: ios=2169/2560, merge=0/0, ticks=424/383, in_queue=807, util=86.87% 00:11:47.824 nvme0n2: ios=2343/2560, merge=0/0, ticks=453/365, in_queue=818, util=89.26% 00:11:47.824 nvme0n3: ios=1526/1536, merge=0/0, ticks=486/328, in_queue=814, util=89.70% 00:11:47.824 nvme0n4: ios=1077/1536, merge=0/0, ticks=368/404, in_queue=772, util=88.77% 00:11:47.824 05:17:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:47.824 [global] 00:11:47.824 thread=1 00:11:47.824 invalidate=1 00:11:47.824 rw=write 00:11:47.824 time_based=1 00:11:47.824 runtime=1 00:11:47.824 ioengine=libaio 00:11:47.824 direct=1 00:11:47.824 bs=4096 00:11:47.824 iodepth=128 00:11:47.824 norandommap=0 00:11:47.824 numjobs=1 00:11:47.824 00:11:47.824 verify_dump=1 00:11:47.824 verify_backlog=512 00:11:47.824 verify_state_save=0 00:11:47.824 do_verify=1 00:11:47.824 verify=crc32c-intel 00:11:47.824 [job0] 00:11:47.824 filename=/dev/nvme0n1 00:11:47.824 [job1] 00:11:47.824 filename=/dev/nvme0n2 00:11:47.824 [job2] 00:11:47.824 filename=/dev/nvme0n3 00:11:47.824 [job3] 00:11:47.824 filename=/dev/nvme0n4 00:11:47.824 Could not set queue depth (nvme0n1) 00:11:47.824 Could not set queue depth (nvme0n2) 00:11:47.824 Could not set queue depth (nvme0n3) 00:11:47.824 Could not set queue depth (nvme0n4) 00:11:47.824 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.824 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.824 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.824 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.824 fio-3.35 00:11:47.824 Starting 4 threads 00:11:49.199 00:11:49.199 job0: (groupid=0, jobs=1): err= 0: pid=71671: Thu Jul 25 05:17:53 2024 00:11:49.199 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:11:49.199 slat 
(usec): min=3, max=9205, avg=194.56, stdev=921.66 00:11:49.199 clat (usec): min=17187, max=40380, avg=25027.09, stdev=3633.46 00:11:49.199 lat (usec): min=17198, max=40394, avg=25221.65, stdev=3733.71 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[18220], 5.00th=[21103], 10.00th=[21890], 20.00th=[22676], 00:11:49.199 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23987], 00:11:49.199 | 70.00th=[25560], 80.00th=[27657], 90.00th=[30016], 95.00th=[32900], 00:11:49.199 | 99.00th=[35914], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:11:49.199 | 99.99th=[40633] 00:11:49.199 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1007msec); 0 zone resets 00:11:49.199 slat (usec): min=4, max=11758, avg=185.85, stdev=1059.10 00:11:49.199 clat (usec): min=1684, max=38069, avg=23819.20, stdev=3827.77 00:11:49.199 lat (usec): min=6916, max=38078, avg=24005.05, stdev=3910.42 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[ 9372], 5.00th=[19530], 10.00th=[20841], 20.00th=[21890], 00:11:49.199 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23462], 00:11:49.199 | 70.00th=[24511], 80.00th=[25560], 90.00th=[30278], 95.00th=[32375], 00:11:49.199 | 99.00th=[33817], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:11:49.199 | 99.99th=[38011] 00:11:49.199 bw ( KiB/s): min= 8192, max=12288, per=16.32%, avg=10240.00, stdev=2896.31, samples=2 00:11:49.199 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:49.199 lat (msec) : 2=0.02%, 10=0.54%, 20=3.49%, 50=95.95% 00:11:49.199 cpu : usr=1.99%, sys=6.26%, ctx=672, majf=0, minf=15 00:11:49.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:49.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.199 issued rwts: total=2560,2630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.199 job1: (groupid=0, jobs=1): err= 0: pid=71672: Thu Jul 25 05:17:53 2024 00:11:49.199 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:11:49.199 slat (usec): min=3, max=10251, avg=97.79, stdev=482.82 00:11:49.199 clat (usec): min=7061, max=42025, avg=12816.15, stdev=5737.77 00:11:49.199 lat (usec): min=7082, max=42045, avg=12913.94, stdev=5792.31 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10290], 00:11:49.199 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:11:49.199 | 70.00th=[11600], 80.00th=[12387], 90.00th=[15533], 95.00th=[28705], 00:11:49.199 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36963], 99.95th=[39060], 00:11:49.199 | 99.99th=[42206] 00:11:49.199 write: IOPS=5298, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1006msec); 0 zone resets 00:11:49.199 slat (usec): min=4, max=8796, avg=86.82, stdev=414.53 00:11:49.199 clat (usec): min=660, max=43092, avg=11564.84, stdev=4450.94 00:11:49.199 lat (usec): min=6494, max=43111, avg=11651.66, stdev=4487.46 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[ 9896], 00:11:49.199 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:11:49.199 | 70.00th=[11076], 80.00th=[11338], 90.00th=[12780], 95.00th=[21627], 00:11:49.199 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36439], 99.95th=[37487], 00:11:49.199 | 99.99th=[43254] 00:11:49.199 bw ( KiB/s): min=17040, max=24625, 
per=33.20%, avg=20832.50, stdev=5363.40, samples=2 00:11:49.199 iops : min= 4260, max= 6156, avg=5208.00, stdev=1340.67, samples=2 00:11:49.199 lat (usec) : 750=0.01% 00:11:49.199 lat (msec) : 10=19.25%, 20=73.17%, 50=7.57% 00:11:49.199 cpu : usr=4.58%, sys=14.63%, ctx=719, majf=0, minf=5 00:11:49.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:49.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.199 issued rwts: total=5120,5330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.199 job2: (groupid=0, jobs=1): err= 0: pid=71673: Thu Jul 25 05:17:53 2024 00:11:49.199 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:11:49.199 slat (usec): min=3, max=9571, avg=197.33, stdev=948.58 00:11:49.199 clat (usec): min=15549, max=39376, avg=24740.77, stdev=3824.07 00:11:49.199 lat (usec): min=15585, max=39399, avg=24938.10, stdev=3912.99 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[16319], 5.00th=[18744], 10.00th=[21890], 20.00th=[22938], 00:11:49.199 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:11:49.199 | 70.00th=[25297], 80.00th=[28443], 90.00th=[30016], 95.00th=[32375], 00:11:49.199 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38011], 99.95th=[39060], 00:11:49.199 | 99.99th=[39584] 00:11:49.199 write: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1006msec); 0 zone resets 00:11:49.199 slat (usec): min=4, max=10783, avg=176.91, stdev=1013.19 00:11:49.199 clat (usec): min=1823, max=40988, avg=23405.16, stdev=4115.10 00:11:49.199 lat (usec): min=7906, max=41182, avg=23582.07, stdev=4208.93 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[ 8455], 5.00th=[18482], 10.00th=[20055], 20.00th=[21627], 00:11:49.199 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22676], 60.00th=[23200], 00:11:49.199 | 70.00th=[23725], 80.00th=[24773], 90.00th=[28967], 95.00th=[32113], 00:11:49.199 | 99.00th=[34866], 99.50th=[36439], 99.90th=[38536], 99.95th=[39060], 00:11:49.199 | 99.99th=[41157] 00:11:49.199 bw ( KiB/s): min= 8424, max=12312, per=16.52%, avg=10368.00, stdev=2749.23, samples=2 00:11:49.199 iops : min= 2106, max= 3078, avg=2592.00, stdev=687.31, samples=2 00:11:49.199 lat (msec) : 2=0.02%, 10=0.80%, 20=8.60%, 50=90.58% 00:11:49.199 cpu : usr=3.08%, sys=5.27%, ctx=579, majf=0, minf=12 00:11:49.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:49.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.199 issued rwts: total=2560,2717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.199 job3: (groupid=0, jobs=1): err= 0: pid=71674: Thu Jul 25 05:17:53 2024 00:11:49.199 read: IOPS=4695, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec) 00:11:49.199 slat (usec): min=6, max=3854, avg=100.81, stdev=471.57 00:11:49.199 clat (usec): min=351, max=17794, avg=13241.82, stdev=2117.86 00:11:49.199 lat (usec): min=2776, max=20331, avg=13342.63, stdev=2084.89 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[ 5932], 5.00th=[10552], 10.00th=[11207], 20.00th=[11469], 00:11:49.199 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:11:49.199 | 70.00th=[13829], 80.00th=[14353], 90.00th=[16712], 95.00th=[16909], 
00:11:49.199 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:11:49.199 | 99.99th=[17695] 00:11:49.199 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:49.199 slat (usec): min=8, max=4988, avg=95.26, stdev=426.12 00:11:49.199 clat (usec): min=9182, max=18004, avg=12558.06, stdev=1851.37 00:11:49.199 lat (usec): min=9212, max=18023, avg=12653.33, stdev=1852.50 00:11:49.199 clat percentiles (usec): 00:11:49.199 | 1.00th=[ 9503], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11076], 00:11:49.199 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12387], 60.00th=[13173], 00:11:49.199 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[16581], 00:11:49.199 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:11:49.199 | 99.99th=[17957] 00:11:49.199 bw ( KiB/s): min=20240, max=20480, per=32.45%, avg=20360.00, stdev=169.71, samples=2 00:11:49.199 iops : min= 5060, max= 5120, avg=5090.00, stdev=42.43, samples=2 00:11:49.199 lat (usec) : 500=0.01% 00:11:49.199 lat (msec) : 4=0.33%, 10=5.42%, 20=94.24% 00:11:49.199 cpu : usr=4.10%, sys=13.99%, ctx=500, majf=0, minf=9 00:11:49.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:49.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.199 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.199 00:11:49.199 Run status group 0 (all jobs): 00:11:49.199 READ: bw=58.0MiB/s (60.8MB/s), 9.93MiB/s-19.9MiB/s (10.4MB/s-20.8MB/s), io=58.4MiB (61.2MB), run=1002-1007msec 00:11:49.199 WRITE: bw=61.3MiB/s (64.3MB/s), 10.2MiB/s-20.7MiB/s (10.7MB/s-21.7MB/s), io=61.7MiB (64.7MB), run=1002-1007msec 00:11:49.199 00:11:49.199 Disk stats (read/write): 00:11:49.199 nvme0n1: ios=2111/2560, merge=0/0, ticks=23150/26827, in_queue=49977, util=88.88% 00:11:49.199 nvme0n2: ios=4753/5120, merge=0/0, ticks=24823/23579, in_queue=48402, util=91.11% 00:11:49.199 nvme0n3: ios=2146/2560, merge=0/0, ticks=24198/26520, in_queue=50718, util=89.15% 00:11:49.200 nvme0n4: ios=4125/4224, merge=0/0, ticks=12892/11983, in_queue=24875, util=90.95% 00:11:49.200 05:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:49.200 [global] 00:11:49.200 thread=1 00:11:49.200 invalidate=1 00:11:49.200 rw=randwrite 00:11:49.200 time_based=1 00:11:49.200 runtime=1 00:11:49.200 ioengine=libaio 00:11:49.200 direct=1 00:11:49.200 bs=4096 00:11:49.200 iodepth=128 00:11:49.200 norandommap=0 00:11:49.200 numjobs=1 00:11:49.200 00:11:49.200 verify_dump=1 00:11:49.200 verify_backlog=512 00:11:49.200 verify_state_save=0 00:11:49.200 do_verify=1 00:11:49.200 verify=crc32c-intel 00:11:49.200 [job0] 00:11:49.200 filename=/dev/nvme0n1 00:11:49.200 [job1] 00:11:49.200 filename=/dev/nvme0n2 00:11:49.200 [job2] 00:11:49.200 filename=/dev/nvme0n3 00:11:49.200 [job3] 00:11:49.200 filename=/dev/nvme0n4 00:11:49.200 Could not set queue depth (nvme0n1) 00:11:49.200 Could not set queue depth (nvme0n2) 00:11:49.200 Could not set queue depth (nvme0n3) 00:11:49.200 Could not set queue depth (nvme0n4) 00:11:49.200 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.200 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.200 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.200 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.200 fio-3.35 00:11:49.200 Starting 4 threads 00:11:50.573 00:11:50.573 job0: (groupid=0, jobs=1): err= 0: pid=71727: Thu Jul 25 05:17:54 2024 00:11:50.573 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:11:50.573 slat (usec): min=4, max=8397, avg=177.22, stdev=816.90 00:11:50.573 clat (usec): min=15193, max=34965, avg=22517.66, stdev=3366.31 00:11:50.573 lat (usec): min=15205, max=38317, avg=22694.89, stdev=3404.08 00:11:50.573 clat percentiles (usec): 00:11:50.573 | 1.00th=[15270], 5.00th=[16909], 10.00th=[17957], 20.00th=[20055], 00:11:50.573 | 30.00th=[20579], 40.00th=[21365], 50.00th=[22414], 60.00th=[23200], 00:11:50.573 | 70.00th=[24249], 80.00th=[25035], 90.00th=[26870], 95.00th=[28443], 00:11:50.573 | 99.00th=[31851], 99.50th=[32375], 99.90th=[34866], 99.95th=[34866], 00:11:50.573 | 99.99th=[34866] 00:11:50.573 write: IOPS=2980, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1005msec); 0 zone resets 00:11:50.573 slat (usec): min=4, max=6183, avg=174.49, stdev=640.19 00:11:50.573 clat (usec): min=4488, max=35548, avg=23059.48, stdev=3996.11 00:11:50.573 lat (usec): min=4956, max=35568, avg=23233.97, stdev=4003.22 00:11:50.573 clat percentiles (usec): 00:11:50.573 | 1.00th=[10814], 5.00th=[17957], 10.00th=[18744], 20.00th=[20055], 00:11:50.574 | 30.00th=[21365], 40.00th=[22414], 50.00th=[23200], 60.00th=[23725], 00:11:50.574 | 70.00th=[23987], 80.00th=[25297], 90.00th=[27657], 95.00th=[30540], 00:11:50.574 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:11:50.574 | 99.99th=[35390] 00:11:50.574 bw ( KiB/s): min=10656, max=12263, per=17.42%, avg=11459.50, stdev=1136.32, samples=2 00:11:50.574 iops : min= 2664, max= 3065, avg=2864.50, stdev=283.55, samples=2 00:11:50.574 lat (msec) : 10=0.45%, 20=19.03%, 50=80.52% 00:11:50.574 cpu : usr=2.19%, sys=8.76%, ctx=593, majf=0, minf=9 00:11:50.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:50.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.574 issued rwts: total=2560,2995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.574 job1: (groupid=0, jobs=1): err= 0: pid=71728: Thu Jul 25 05:17:54 2024 00:11:50.574 read: IOPS=5349, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1002msec) 00:11:50.574 slat (usec): min=7, max=3190, avg=89.74, stdev=412.74 00:11:50.574 clat (usec): min=332, max=14227, avg=11716.06, stdev=1246.05 00:11:50.574 lat (usec): min=2581, max=15088, avg=11805.81, stdev=1194.05 00:11:50.574 clat percentiles (usec): 00:11:50.574 | 1.00th=[ 5604], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11207], 00:11:50.574 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:11:50.574 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:11:50.574 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:11:50.574 | 99.99th=[14222] 00:11:50.574 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:11:50.574 slat (usec): min=9, max=2842, avg=84.81, stdev=347.47 00:11:50.574 clat (usec): min=8683, max=14173, avg=11322.70, stdev=1210.22 00:11:50.574 
lat (usec): min=8700, max=14201, avg=11407.51, stdev=1207.14 00:11:50.574 clat percentiles (usec): 00:11:50.574 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10028], 00:11:50.574 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11469], 60.00th=[11863], 00:11:50.574 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12911], 95.00th=[13173], 00:11:50.574 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14091], 99.95th=[14222], 00:11:50.574 | 99.99th=[14222] 00:11:50.574 bw ( KiB/s): min=20736, max=20736, per=31.52%, avg=20736.00, stdev= 0.00, samples=1 00:11:50.574 iops : min= 5184, max= 5184, avg=5184.00, stdev= 0.00, samples=1 00:11:50.574 lat (usec) : 500=0.01% 00:11:50.574 lat (msec) : 4=0.29%, 10=12.05%, 20=87.65% 00:11:50.574 cpu : usr=4.50%, sys=15.18%, ctx=559, majf=0, minf=5 00:11:50.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:50.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.574 issued rwts: total=5360,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.574 job2: (groupid=0, jobs=1): err= 0: pid=71729: Thu Jul 25 05:17:54 2024 00:11:50.574 read: IOPS=4610, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec) 00:11:50.574 slat (usec): min=6, max=5132, avg=100.91, stdev=494.55 00:11:50.574 clat (usec): min=558, max=18445, avg=13287.75, stdev=1282.63 00:11:50.574 lat (usec): min=3871, max=18474, avg=13388.66, stdev=1317.77 00:11:50.574 clat percentiles (usec): 00:11:50.574 | 1.00th=[10290], 5.00th=[11207], 10.00th=[11863], 20.00th=[12518], 00:11:50.574 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:11:50.574 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[15008], 00:11:50.574 | 99.00th=[16450], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:11:50.574 | 99.99th=[18482] 00:11:50.574 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:11:50.574 slat (usec): min=10, max=3981, avg=97.18, stdev=464.70 00:11:50.574 clat (usec): min=4209, max=17572, avg=12751.99, stdev=1420.27 00:11:50.574 lat (usec): min=4232, max=17775, avg=12849.17, stdev=1409.44 00:11:50.574 clat percentiles (usec): 00:11:50.574 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[12256], 00:11:50.574 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:11:50.574 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14091], 95.00th=[14484], 00:11:50.574 | 99.00th=[15139], 99.50th=[15795], 99.90th=[17171], 99.95th=[17433], 00:11:50.574 | 99.99th=[17695] 00:11:50.574 bw ( KiB/s): min=19592, max=20439, per=30.42%, avg=20015.50, stdev=598.92, samples=2 00:11:50.574 iops : min= 4898, max= 5109, avg=5003.50, stdev=149.20, samples=2 00:11:50.574 lat (usec) : 750=0.01% 00:11:50.574 lat (msec) : 4=0.06%, 10=3.49%, 20=96.44% 00:11:50.574 cpu : usr=4.19%, sys=13.17%, ctx=413, majf=0, minf=10 00:11:50.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:50.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.574 issued rwts: total=4624,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.574 job3: (groupid=0, jobs=1): err= 0: pid=71730: Thu Jul 25 05:17:54 2024 00:11:50.574 read: IOPS=2544, BW=9.94MiB/s 
(10.4MB/s)(10.0MiB/1006msec) 00:11:50.574 slat (usec): min=2, max=7744, avg=186.37, stdev=812.27 00:11:50.574 clat (usec): min=14072, max=30864, avg=23213.26, stdev=3100.72 00:11:50.574 lat (usec): min=15575, max=30887, avg=23399.62, stdev=3047.48 00:11:50.574 clat percentiles (usec): 00:11:50.574 | 1.00th=[16581], 5.00th=[18482], 10.00th=[19006], 20.00th=[20579], 00:11:50.574 | 30.00th=[21365], 40.00th=[22414], 50.00th=[23462], 60.00th=[23987], 00:11:50.574 | 70.00th=[24511], 80.00th=[26084], 90.00th=[27395], 95.00th=[28443], 00:11:50.574 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:11:50.574 | 99.99th=[30802] 00:11:50.574 write: IOPS=2783, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1006msec); 0 zone resets 00:11:50.574 slat (usec): min=8, max=6022, avg=179.50, stdev=620.69 00:11:50.574 clat (usec): min=3560, max=40069, avg=23952.60, stdev=4117.51 00:11:50.574 lat (usec): min=6805, max=40096, avg=24132.10, stdev=4102.86 00:11:50.574 clat percentiles (usec): 00:11:50.574 | 1.00th=[10028], 5.00th=[18220], 10.00th=[20317], 20.00th=[21627], 00:11:50.574 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:11:50.574 | 70.00th=[24511], 80.00th=[26084], 90.00th=[28705], 95.00th=[31851], 00:11:50.574 | 99.00th=[36439], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:11:50.574 | 99.99th=[40109] 00:11:50.574 bw ( KiB/s): min= 9864, max=11497, per=16.23%, avg=10680.50, stdev=1154.71, samples=2 00:11:50.574 iops : min= 2466, max= 2874, avg=2670.00, stdev=288.50, samples=2 00:11:50.574 lat (msec) : 4=0.02%, 10=0.47%, 20=12.72%, 50=86.79% 00:11:50.574 cpu : usr=2.09%, sys=8.26%, ctx=683, majf=0, minf=13 00:11:50.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:50.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.574 issued rwts: total=2560,2800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.574 00:11:50.574 Run status group 0 (all jobs): 00:11:50.574 READ: bw=58.6MiB/s (61.5MB/s), 9.94MiB/s-20.9MiB/s (10.4MB/s-21.9MB/s), io=59.0MiB (61.9MB), run=1002-1006msec 00:11:50.574 WRITE: bw=64.3MiB/s (67.4MB/s), 10.9MiB/s-22.0MiB/s (11.4MB/s-23.0MB/s), io=64.6MiB (67.8MB), run=1002-1006msec 00:11:50.574 00:11:50.574 Disk stats (read/write): 00:11:50.574 nvme0n1: ios=2162/2560, merge=0/0, ticks=13756/17800, in_queue=31556, util=86.17% 00:11:50.574 nvme0n2: ios=4629/4608, merge=0/0, ticks=12711/11465, in_queue=24176, util=87.75% 00:11:50.574 nvme0n3: ios=4096/4191, merge=0/0, ticks=16734/15327, in_queue=32061, util=88.52% 00:11:50.574 nvme0n4: ios=2048/2462, merge=0/0, ticks=11699/13675, in_queue=25374, util=89.56% 00:11:50.574 05:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:50.574 05:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71749 00:11:50.574 05:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:50.574 05:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:50.574 [global] 00:11:50.574 thread=1 00:11:50.574 invalidate=1 00:11:50.574 rw=read 00:11:50.574 time_based=1 00:11:50.574 runtime=10 00:11:50.574 ioengine=libaio 00:11:50.574 direct=1 00:11:50.574 bs=4096 00:11:50.574 iodepth=1 00:11:50.574 norandommap=1 00:11:50.574 
numjobs=1 00:11:50.574 00:11:50.574 [job0] 00:11:50.574 filename=/dev/nvme0n1 00:11:50.574 [job1] 00:11:50.574 filename=/dev/nvme0n2 00:11:50.574 [job2] 00:11:50.574 filename=/dev/nvme0n3 00:11:50.574 [job3] 00:11:50.574 filename=/dev/nvme0n4 00:11:50.574 Could not set queue depth (nvme0n1) 00:11:50.574 Could not set queue depth (nvme0n2) 00:11:50.574 Could not set queue depth (nvme0n3) 00:11:50.574 Could not set queue depth (nvme0n4) 00:11:50.574 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.574 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.574 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.574 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.574 fio-3.35 00:11:50.574 Starting 4 threads 00:11:53.855 05:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:53.855 fio: pid=71793, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:53.855 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=25632768, buflen=4096 00:11:53.855 05:17:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:53.855 fio: pid=71792, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:53.855 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26177536, buflen=4096 00:11:54.113 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.113 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:54.113 fio: pid=71790, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:54.113 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=46112768, buflen=4096 00:11:54.371 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.371 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:54.629 fio: pid=71791, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:54.629 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=38719488, buflen=4096 00:11:54.629 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.629 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:54.629 00:11:54.629 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71790: Thu Jul 25 05:17:58 2024 00:11:54.629 read: IOPS=3226, BW=12.6MiB/s (13.2MB/s)(44.0MiB/3489msec) 00:11:54.629 slat (usec): min=13, max=11757, avg=26.26, stdev=186.00 00:11:54.629 clat (usec): min=5, max=7033, avg=281.45, stdev=118.67 00:11:54.629 lat (usec): min=148, max=11968, avg=307.71, stdev=221.34 00:11:54.629 clat percentiles (usec): 00:11:54.629 | 1.00th=[ 151], 5.00th=[ 167], 
10.00th=[ 186], 20.00th=[ 225], 00:11:54.629 | 30.00th=[ 233], 40.00th=[ 249], 50.00th=[ 273], 60.00th=[ 285], 00:11:54.629 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 379], 95.00th=[ 408], 00:11:54.629 | 99.00th=[ 523], 99.50th=[ 603], 99.90th=[ 1205], 99.95th=[ 1663], 00:11:54.629 | 99.99th=[ 4293] 00:11:54.629 bw ( KiB/s): min=11472, max=13800, per=37.36%, avg=12942.67, stdev=852.48, samples=6 00:11:54.629 iops : min= 2868, max= 3450, avg=3235.67, stdev=213.12, samples=6 00:11:54.629 lat (usec) : 10=0.01%, 250=40.31%, 500=58.34%, 750=1.07%, 1000=0.13% 00:11:54.629 lat (msec) : 2=0.09%, 4=0.03%, 10=0.02% 00:11:54.629 cpu : usr=1.09%, sys=6.28%, ctx=11265, majf=0, minf=1 00:11:54.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.629 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.629 issued rwts: total=11259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.629 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71791: Thu Jul 25 05:17:58 2024 00:11:54.629 read: IOPS=2454, BW=9816KiB/s (10.1MB/s)(36.9MiB/3852msec) 00:11:54.629 slat (usec): min=7, max=17345, avg=31.72, stdev=266.69 00:11:54.629 clat (usec): min=127, max=3469, avg=373.54, stdev=182.88 00:11:54.629 lat (usec): min=141, max=17677, avg=405.26, stdev=324.46 00:11:54.629 clat percentiles (usec): 00:11:54.629 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 200], 00:11:54.629 | 30.00th=[ 235], 40.00th=[ 273], 50.00th=[ 396], 60.00th=[ 478], 00:11:54.629 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 578], 00:11:54.629 | 99.00th=[ 766], 99.50th=[ 840], 99.90th=[ 1811], 99.95th=[ 2868], 00:11:54.629 | 99.99th=[ 3458] 00:11:54.629 bw ( KiB/s): min= 7080, max=12912, per=25.44%, avg=8814.29, stdev=2666.59, samples=7 00:11:54.629 iops : min= 1770, max= 3228, avg=2203.57, stdev=666.65, samples=7 00:11:54.629 lat (usec) : 250=36.16%, 500=30.89%, 750=31.76%, 1000=0.98% 00:11:54.629 lat (msec) : 2=0.11%, 4=0.08% 00:11:54.629 cpu : usr=1.01%, sys=4.49%, ctx=11482, majf=0, minf=1 00:11:54.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.629 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.629 issued rwts: total=9454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.629 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71792: Thu Jul 25 05:17:58 2024 00:11:54.629 read: IOPS=1967, BW=7868KiB/s (8057kB/s)(25.0MiB/3249msec) 00:11:54.629 slat (usec): min=8, max=13731, avg=28.86, stdev=208.81 00:11:54.629 clat (usec): min=159, max=3093, avg=477.04, stdev=132.30 00:11:54.629 lat (usec): min=184, max=13973, avg=505.90, stdev=244.31 00:11:54.629 clat percentiles (usec): 00:11:54.629 | 1.00th=[ 239], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 375], 00:11:54.629 | 30.00th=[ 437], 40.00th=[ 486], 50.00th=[ 506], 60.00th=[ 519], 00:11:54.629 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 611], 00:11:54.629 | 99.00th=[ 824], 99.50th=[ 898], 99.90th=[ 1991], 99.95th=[ 2245], 00:11:54.629 | 99.99th=[ 3097] 00:11:54.629 bw ( KiB/s): min= 7080, max= 9656, per=22.43%, avg=7769.33, stdev=986.10, 
samples=6 00:11:54.629 iops : min= 1770, max= 2414, avg=1942.33, stdev=246.53, samples=6 00:11:54.629 lat (usec) : 250=2.00%, 500=44.79%, 750=51.60%, 1000=1.33% 00:11:54.629 lat (msec) : 2=0.17%, 4=0.09% 00:11:54.629 cpu : usr=0.92%, sys=3.73%, ctx=8167, majf=0, minf=1 00:11:54.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.630 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.630 issued rwts: total=6392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.630 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71793: Thu Jul 25 05:17:58 2024 00:11:54.630 read: IOPS=2111, BW=8445KiB/s (8648kB/s)(24.4MiB/2964msec) 00:11:54.630 slat (usec): min=7, max=196, avg=29.06, stdev=19.09 00:11:54.630 clat (usec): min=153, max=7900, avg=441.95, stdev=207.46 00:11:54.630 lat (usec): min=176, max=7932, avg=471.02, stdev=208.04 00:11:54.630 clat percentiles (usec): 00:11:54.630 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 208], 00:11:54.630 | 30.00th=[ 420], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[ 515], 00:11:54.630 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 603], 00:11:54.630 | 99.00th=[ 799], 99.50th=[ 873], 99.90th=[ 2966], 99.95th=[ 3720], 00:11:54.630 | 99.99th=[ 7898] 00:11:54.630 bw ( KiB/s): min= 7064, max=14232, per=24.91%, avg=8630.40, stdev=3134.09, samples=5 00:11:54.630 iops : min= 1766, max= 3558, avg=2157.60, stdev=783.52, samples=5 00:11:54.630 lat (usec) : 250=23.42%, 500=26.57%, 750=48.41%, 1000=1.39% 00:11:54.630 lat (msec) : 2=0.06%, 4=0.10%, 10=0.03% 00:11:54.630 cpu : usr=1.08%, sys=4.39%, ctx=8104, majf=0, minf=1 00:11:54.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.630 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.630 issued rwts: total=6259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.630 00:11:54.630 Run status group 0 (all jobs): 00:11:54.630 READ: bw=33.8MiB/s (35.5MB/s), 7868KiB/s-12.6MiB/s (8057kB/s-13.2MB/s), io=130MiB (137MB), run=2964-3852msec 00:11:54.630 00:11:54.630 Disk stats (read/write): 00:11:54.630 nvme0n1: ios=10805/0, merge=0/0, ticks=3125/0, in_queue=3125, util=94.99% 00:11:54.630 nvme0n2: ios=8044/0, merge=0/0, ticks=3267/0, in_queue=3267, util=95.21% 00:11:54.630 nvme0n3: ios=6029/0, merge=0/0, ticks=2817/0, in_queue=2817, util=96.08% 00:11:54.630 nvme0n4: ios=6063/0, merge=0/0, ticks=2598/0, in_queue=2598, util=96.52% 00:11:54.889 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.889 05:17:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:55.147 05:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.147 05:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:55.405 05:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.405 05:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:55.672 05:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.672 05:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 71749 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:55.930 nvmf hotplug test: fio failed as expected 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:55.930 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.496 
05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.496 rmmod nvme_tcp 00:11:56.496 rmmod nvme_fabrics 00:11:56.496 rmmod nvme_keyring 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71266 ']' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71266 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 71266 ']' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 71266 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71266 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.496 killing process with pid 71266 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71266' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 71266 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 71266 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.496 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:56.755 00:11:56.755 real 0m19.177s 00:11:56.755 user 1m15.379s 00:11:56.755 sys 0m7.649s 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 ************************************ 00:11:56.755 END TEST nvmf_fio_target 00:11:56.755 ************************************ 00:11:56.755 05:18:00 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 ************************************ 00:11:56.755 START TEST nvmf_bdevio 00:11:56.755 ************************************ 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:56.755 * Looking for test storage... 00:11:56.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.755 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.756 05:18:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.756 
05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.756 05:18:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:56.756 Cannot find device "nvmf_tgt_br" 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.756 Cannot find device "nvmf_tgt_br2" 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:56.756 Cannot find device "nvmf_tgt_br" 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:56.756 Cannot find device "nvmf_tgt_br2" 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:56.756 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:57.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:57.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:57.014 05:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:57.014 05:18:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:57.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:11:57.014 00:11:57.014 --- 10.0.0.2 ping statistics --- 00:11:57.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.014 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:57.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:57.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:57.014 00:11:57.014 --- 10.0.0.3 ping statistics --- 00:11:57.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.014 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:57.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:57.014 00:11:57.014 --- 10.0.0.1 ping statistics --- 00:11:57.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.014 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.014 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=72118 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 72118 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 72118 ']' 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.273 05:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.273 [2024-07-25 05:18:01.257639] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
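[Editor's note] The nvmf_veth_init sequence traced above builds a three-legged layout: the initiator keeps nvmf_init_if in the root namespace, the target gets nvmf_tgt_if and nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, and the peer end of each veth pair is enslaved to the nvmf_br bridge. A condensed sketch of the same commands, with names and addresses exactly as in the trace (the second target leg and the iptables ACCEPT rules follow the identical pattern and are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring every interface (and lo in the namespace) up; the three pings
    # above then verify reachability in both directions across the bridge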
00:11:57.273 [2024-07-25 05:18:01.257731] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.273 [2024-07-25 05:18:01.394919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.531 [2024-07-25 05:18:01.501125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.531 [2024-07-25 05:18:01.501206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.531 [2024-07-25 05:18:01.501225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.531 [2024-07-25 05:18:01.501241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.531 [2024-07-25 05:18:01.501254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.531 [2024-07-25 05:18:01.501425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:57.531 [2024-07-25 05:18:01.501902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:57.531 [2024-07-25 05:18:01.502005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:57.531 [2024-07-25 05:18:01.502015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.467 [2024-07-25 05:18:02.324484] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.467 Malloc0 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.467 [2024-07-25 05:18:02.379434] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:58.467 { 00:11:58.467 "params": { 00:11:58.467 "name": "Nvme$subsystem", 00:11:58.467 "trtype": "$TEST_TRANSPORT", 00:11:58.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.467 "adrfam": "ipv4", 00:11:58.467 "trsvcid": "$NVMF_PORT", 00:11:58.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.467 "hdgst": ${hdgst:-false}, 00:11:58.467 "ddgst": ${ddgst:-false} 00:11:58.467 }, 00:11:58.467 "method": "bdev_nvme_attach_controller" 00:11:58.467 } 00:11:58.467 EOF 00:11:58.467 )") 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:58.467 05:18:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:58.467 "params": { 00:11:58.467 "name": "Nvme1", 00:11:58.467 "trtype": "tcp", 00:11:58.467 "traddr": "10.0.0.2", 00:11:58.467 "adrfam": "ipv4", 00:11:58.467 "trsvcid": "4420", 00:11:58.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.467 "hdgst": false, 00:11:58.467 "ddgst": false 00:11:58.467 }, 00:11:58.467 "method": "bdev_nvme_attach_controller" 00:11:58.467 }' 00:11:58.467 [2024-07-25 05:18:02.427405] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
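[Editor's note] At this point bdevio.sh has provisioned the whole target over RPC and handed the initiator side to the bdevio app as generated JSON. Outside the harness the same setup is a five-call rpc.py sequence; a sketch with arguments taken from the trace (the scripts/rpc.py invocation style is assumed, since the harness goes through its rpc_cmd wrapper instead):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The /dev/fd/62 argument in the bdevio command line above is bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza shown in the trace, and bdevio reads it as a file, so no temp file is ever written:

    bdevio --json <(gen_nvmf_target_json)    # the <(...) pipe surfaces as /dev/fd/62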
00:11:58.467 [2024-07-25 05:18:02.427492] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72172 ] 00:11:58.467 [2024-07-25 05:18:02.592020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.726 [2024-07-25 05:18:02.681080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.726 [2024-07-25 05:18:02.681153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.726 [2024-07-25 05:18:02.681158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.726 I/O targets: 00:11:58.726 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:58.726 00:11:58.726 00:11:58.726 CUnit - A unit testing framework for C - Version 2.1-3 00:11:58.726 http://cunit.sourceforge.net/ 00:11:58.726 00:11:58.726 00:11:58.726 Suite: bdevio tests on: Nvme1n1 00:11:58.726 Test: blockdev write read block ...passed 00:11:58.985 Test: blockdev write zeroes read block ...passed 00:11:58.985 Test: blockdev write zeroes read no split ...passed 00:11:58.985 Test: blockdev write zeroes read split ...passed 00:11:58.985 Test: blockdev write zeroes read split partial ...passed 00:11:58.985 Test: blockdev reset ...[2024-07-25 05:18:02.957370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:58.985 [2024-07-25 05:18:02.957533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e7180 (9): Bad file descriptor 00:11:58.985 [2024-07-25 05:18:02.970187] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:58.985 passed 00:11:58.985 Test: blockdev write read 8 blocks ...passed 00:11:58.985 Test: blockdev write read size > 128k ...passed 00:11:58.985 Test: blockdev write read invalid size ...passed 00:11:58.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:58.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:58.985 Test: blockdev write read max offset ...passed 00:11:58.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:58.985 Test: blockdev writev readv 8 blocks ...passed 00:11:58.985 Test: blockdev writev readv 30 x 1block ...passed 00:11:58.985 Test: blockdev writev readv block ...passed 00:11:58.985 Test: blockdev writev readv size > 128k ...passed 00:11:58.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:58.985 Test: blockdev comparev and writev ...[2024-07-25 05:18:03.146377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.146431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:58.985 [2024-07-25 05:18:03.146464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.146484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:58.985 [2024-07-25 05:18:03.147011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.147030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:58.985 [2024-07-25 05:18:03.147047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.147057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:58.985 [2024-07-25 05:18:03.147536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.147553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:58.985 [2024-07-25 05:18:03.147570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.147580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:58.985 [2024-07-25 05:18:03.148061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.148084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:58.985 [2024-07-25 05:18:03.148102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.985 [2024-07-25 05:18:03.148112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:59.243 passed 00:11:59.243 Test: blockdev nvme passthru rw ...passed 00:11:59.243 Test: blockdev nvme passthru vendor specific ...[2024-07-25 05:18:03.232416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.243 [2024-07-25 05:18:03.232486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:59.243 [2024-07-25 05:18:03.232691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.243 [2024-07-25 05:18:03.232721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:59.243 [2024-07-25 05:18:03.232926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.243 [2024-07-25 05:18:03.232948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:59.243 [2024-07-25 05:18:03.233165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.243 [2024-07-25 05:18:03.233193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:59.243 passed 00:11:59.243 Test: blockdev nvme admin passthru ...passed 00:11:59.243 Test: blockdev copy ...passed 00:11:59.243 00:11:59.243 Run Summary: Type Total Ran Passed Failed Inactive 00:11:59.243 suites 1 1 n/a 0 0 00:11:59.243 tests 23 23 23 0 0 00:11:59.243 asserts 152 152 152 0 n/a 00:11:59.243 00:11:59.243 Elapsed time = 0.896 seconds 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:59.501 rmmod nvme_tcp 00:11:59.501 rmmod nvme_fabrics 00:11:59.501 rmmod nvme_keyring 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
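[Editor's note] The unload step of nvmftestfini traced above runs under set +e inside a bounded retry loop, because nvme-tcp can briefly stay pinned while the just-disconnected controller finishes tearing down; the helper returns as soon as both modprobe -r calls succeed. A minimal sketch of the idiom (the pause between attempts is an assumption, since xtrace hides the loop body on retries):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp                  # the rmmod nvme_tcp/_fabrics/_keyring lines above are its output
        modprobe -v -r nvme-fabrics && break     # real helper returns 0 here on success
        sleep 1                                  # assumed back-off; not visible in the trace
    done
    set -e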
00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 72118 ']' 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 72118 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 72118 ']' 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 72118 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72118 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:59.501 killing process with pid 72118 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72118' 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 72118 00:11:59.501 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 72118 00:11:59.758 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:59.758 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:59.758 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:59.759 00:11:59.759 real 0m3.103s 00:11:59.759 user 0m11.224s 00:11:59.759 sys 0m0.701s 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:59.759 ************************************ 00:11:59.759 END TEST nvmf_bdevio 00:11:59.759 ************************************ 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:59.759 00:11:59.759 real 3m29.877s 00:11:59.759 user 11m26.401s 00:11:59.759 sys 0m58.352s 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.759 ************************************ 00:11:59.759 END TEST nvmf_target_core 00:11:59.759 ************************************ 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:59.759 05:18:03 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:59.759 05:18:03 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:59.759 05:18:03 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.759 05:18:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:59.759 ************************************ 00:11:59.759 START TEST nvmf_target_extra 00:11:59.759 ************************************ 00:11:59.759 05:18:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:00.018 * Looking for test storage... 00:12:00.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.018 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:00.019 05:18:03 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.019 05:18:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.019 ************************************ 00:12:00.019 START TEST nvmf_example 00:12:00.019 ************************************ 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:00.019 * Looking for test storage... 00:12:00.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.019 05:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.019 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 
00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:00.020 Cannot find device "nvmf_tgt_br" 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # true 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:00.020 Cannot find device "nvmf_tgt_br2" 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # true 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:00.020 Cannot find device "nvmf_tgt_br" 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # true 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:00.020 Cannot find device "nvmf_tgt_br2" 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # true 00:12:00.020 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:00.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:00.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:00.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:12:00.278 00:12:00.278 --- 10.0.0.2 ping statistics --- 00:12:00.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.278 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:00.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:00.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:00.278 00:12:00.278 --- 10.0.0.3 ping statistics --- 00:12:00.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.278 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:00.278 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:00.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:12:00.279 00:12:00.279 --- 10.0.0.1 ping statistics --- 00:12:00.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.279 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.279 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=72399 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 72399 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 72399 ']' 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.537 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.537 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.470 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.728 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.728 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.728 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.728 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.728 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.728 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:12:01.728 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:11.719 Initializing NVMe Controllers 00:12:11.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:11.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:11.719 Initialization complete. Launching workers. 00:12:11.719 ======================================================== 00:12:11.720 Latency(us) 00:12:11.720 Device Information : IOPS MiB/s Average min max 00:12:11.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14854.98 58.03 4307.90 838.07 24102.96 00:12:11.720 ======================================================== 00:12:11.720 Total : 14854.98 58.03 4307.90 838.07 24102.96 00:12:11.720 00:12:11.978 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:11.978 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:11.978 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.979 rmmod nvme_tcp 00:12:11.979 rmmod nvme_fabrics 00:12:11.979 rmmod nvme_keyring 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 72399 ']' 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 72399 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 72399 ']' 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 72399 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72399 00:12:11.979 killing process with pid 72399 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72399' 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 72399 00:12:11.979 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 72399 00:12:11.979 nvmf threads initialize successfully 00:12:11.979 bdev subsystem init successfully 00:12:11.979 created a nvmf target service 00:12:11.979 create targets's poll groups done 00:12:11.979 all subsystems of target started 00:12:11.979 nvmf target is running 00:12:11.979 all subsystems of target stopped 00:12:11.979 destroy targets's poll groups done 00:12:11.979 destroyed the nvmf target service 00:12:11.979 bdev subsystem finish successfully 00:12:11.979 nvmf threads destroy successfully 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.979 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:12.240 00:12:12.240 real 0m12.200s 00:12:12.240 user 0m44.140s 00:12:12.240 sys 0m1.870s 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:12.240 ************************************ 00:12:12.240 END TEST nvmf_example 00:12:12.240 ************************************ 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.240 ************************************ 00:12:12.240 START TEST nvmf_filesystem 00:12:12.240 ************************************ 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
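The teardown trace above follows a guarded kill pattern: probe the pid with kill -0, refuse to kill a process named sudo, then kill and reap. A condensed sketch of the logic visible in the trace, not the verbatim SPDK helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                 # nothing to do if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # the trace resolves this to "nvmf"
        [[ $name != sudo ]] || return 1            # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap the child, ignore its exit code
    }

The surrounding modprobe -v -r calls run in a bounded retry loop (for i in {1..20} under set +e) so a still-busy nvme-tcp or nvme-fabrics module does not fail the test outright.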
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:12.240 * Looking for test storage... 00:12:12.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 
00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:12.240 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:12.241 05:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:12.241 #define SPDK_CONFIG_H 00:12:12.241 #define SPDK_CONFIG_APPS 1 00:12:12.241 #define SPDK_CONFIG_ARCH native 00:12:12.241 #undef SPDK_CONFIG_ASAN 00:12:12.241 #define SPDK_CONFIG_AVAHI 1 00:12:12.241 #undef SPDK_CONFIG_CET 00:12:12.241 #define SPDK_CONFIG_COVERAGE 1 00:12:12.241 #define SPDK_CONFIG_CROSS_PREFIX 00:12:12.241 #undef SPDK_CONFIG_CRYPTO 00:12:12.241 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:12.241 #undef SPDK_CONFIG_CUSTOMOCF 00:12:12.241 #undef SPDK_CONFIG_DAOS 00:12:12.241 #define SPDK_CONFIG_DAOS_DIR 00:12:12.241 #define 
SPDK_CONFIG_DEBUG 1 00:12:12.241 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:12.241 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:12.241 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:12.241 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:12.241 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:12.241 #undef SPDK_CONFIG_DPDK_UADK 00:12:12.241 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:12.241 #define SPDK_CONFIG_EXAMPLES 1 00:12:12.241 #undef SPDK_CONFIG_FC 00:12:12.241 #define SPDK_CONFIG_FC_PATH 00:12:12.241 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:12.241 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:12.241 #undef SPDK_CONFIG_FUSE 00:12:12.241 #undef SPDK_CONFIG_FUZZER 00:12:12.241 #define SPDK_CONFIG_FUZZER_LIB 00:12:12.241 #define SPDK_CONFIG_GOLANG 1 00:12:12.241 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:12.241 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:12.241 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:12.241 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:12.241 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:12.241 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:12.241 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:12.241 #define SPDK_CONFIG_IDXD 1 00:12:12.241 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:12.241 #undef SPDK_CONFIG_IPSEC_MB 00:12:12.241 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:12.241 #define SPDK_CONFIG_ISAL 1 00:12:12.241 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:12.241 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:12.241 #define SPDK_CONFIG_LIBDIR 00:12:12.241 #undef SPDK_CONFIG_LTO 00:12:12.241 #define SPDK_CONFIG_MAX_LCORES 128 00:12:12.241 #define SPDK_CONFIG_NVME_CUSE 1 00:12:12.241 #undef SPDK_CONFIG_OCF 00:12:12.241 #define SPDK_CONFIG_OCF_PATH 00:12:12.241 #define SPDK_CONFIG_OPENSSL_PATH 00:12:12.241 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:12.241 #define SPDK_CONFIG_PGO_DIR 00:12:12.241 #undef SPDK_CONFIG_PGO_USE 00:12:12.241 #define SPDK_CONFIG_PREFIX /usr/local 00:12:12.241 #undef SPDK_CONFIG_RAID5F 00:12:12.241 #undef SPDK_CONFIG_RBD 00:12:12.241 #define SPDK_CONFIG_RDMA 1 00:12:12.241 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:12.241 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:12.241 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:12.241 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:12.241 #define SPDK_CONFIG_SHARED 1 00:12:12.241 #undef SPDK_CONFIG_SMA 00:12:12.241 #define SPDK_CONFIG_TESTS 1 00:12:12.241 #undef SPDK_CONFIG_TSAN 00:12:12.241 #define SPDK_CONFIG_UBLK 1 00:12:12.241 #define SPDK_CONFIG_UBSAN 1 00:12:12.241 #undef SPDK_CONFIG_UNIT_TESTS 00:12:12.241 #undef SPDK_CONFIG_URING 00:12:12.241 #define SPDK_CONFIG_URING_PATH 00:12:12.241 #undef SPDK_CONFIG_URING_ZNS 00:12:12.241 #define SPDK_CONFIG_USDT 1 00:12:12.241 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:12.241 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:12.241 #undef SPDK_CONFIG_VFIO_USER 00:12:12.241 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:12.241 #define SPDK_CONFIG_VHOST 1 00:12:12.241 #define SPDK_CONFIG_VIRTIO 1 00:12:12.241 #undef SPDK_CONFIG_VTUNE 00:12:12.241 #define SPDK_CONFIG_VTUNE_DIR 00:12:12.241 #define SPDK_CONFIG_WERROR 1 00:12:12.241 #define SPDK_CONFIG_WPDK_DIR 00:12:12.241 #undef SPDK_CONFIG_XNVME 00:12:12.241 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source 
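The long [[ ... ]] match above is how applications.sh validates the build: it slurps the generated include/spdk/config.h and glob-matches for the debug define before honoring SPDK_AUTOTEST_DEBUG_APPS. A minimal sketch of that probe:

    config=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e $config && $(<"$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build confirmed; debug-only test apps may be enabled
    fi

The backslash-escaped pattern in the trace (*\#\d\e\f\i\n\e...) is just xtrace's rendering of that quoted glob.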
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.241 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname 
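The repeated /opt entries in the PATH above are expected: paths/export.sh prepends the toolchain directories on every source, and the script is sourced once per nested test invocation. A sketch consistent with the traced expansions:

    # Prepended in this order, so protoc ends up first in the final PATH
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH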
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
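The pm/common setup traced above wires up resource monitors with an associative array that flags which collectors need root, plus a two-entry SUDO array that turns the flag into a command prefix. A sketch (the launch line is a hypothetical illustration of how the pieces combine):

    declare -A MONITOR_RESOURCES_SUDO=(
        ["collect-bmc-pm"]=1      # BMC power readings need root
        ["collect-cpu-load"]=0
        ["collect-cpu-temp"]=0
        ["collect-vmstat"]=0
    )
    SUDO[0]=""
    SUDO[1]="sudo -E"
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)   # QEMU guests skip the BMC monitor

    # Hypothetical launch pattern:
    # ${SUDO[${MONITOR_RESOURCES_SUDO[$monitor]}]} "$_pmdir/$monitor" -d "$PM_OUTPUTDIR" ...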
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 
00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:12.242 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:12.243 05:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # 
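Each flag pair in the trace above (a bare : 0 or : 1 followed by an export) is consistent with the usual default-then-export idiom: parameter expansion assigns a default only when autorun-spdk.conf did not already set the variable, and xtrace prints the expanded result. A sketch of that idiom using two flags from this run:

    # SPDK_TEST_NVMF=1 and SPDK_TEST_NVMF_TRANSPORT=tcp come from autorun-spdk.conf,
    # so the := default is a no-op for them; unset flags fall back to 0.
    : "${SPDK_TEST_NVMF:=0}";   export SPDK_TEST_NVMF
    : "${SPDK_TEST_SMA:=0}";    export SPDK_TEST_SMA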
export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 
-- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
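The sanitizer runtime knobs traced above are plain environment exports; the leak suppression file is rebuilt on each run. A condensed sketch (the trace interleaves a cat between the rm and the echo, elided here):

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    suppfile=/var/tmp/asan_suppression_file
    rm -rf "$suppfile"
    echo "leak:libfuse3.so" >> "$suppfile"       # known-benign fuse leak
    export LSAN_OPTIONS=suppressions=$suppfile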
common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:12.243 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 72639 ]] 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 72639 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:12.244 05:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.Y6szVN 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.Y6szVN/tests/target /tmp/spdk.Y6szVN 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=devtmpfs 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4194304 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4194304 00:12:12.244 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6257971200 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=2487009280 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=2507157504 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=20148224 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13784756224 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5245423616 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13784756224 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:12:12.503 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5245423616 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda2 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=843546624 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1012768768 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=100016128 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6267756544 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=135168 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda3 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92499968 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=104607744 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12107776 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1253572608 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253576704 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=95151869952 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4550909952 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:12.504 * Looking for test storage... 
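The trace above and just below is autotest_common.sh's test-storage probe: it caches `df -T` output into the mounts/fss/sizes/avails/uses arrays, then walks the storage_candidates list for the first directory whose backing filesystem can hold the ~2.1 GiB this test requests, falling back to a mktemp scratch tree. A minimal standalone sketch of that selection logic, assuming GNU df; the wrapper name pick_test_storage is hypothetical, and it queries free space directly with `df -B1` instead of caching the `df -T` table:

  pick_test_storage() {   # hypothetical name for the @340-@390 logic traced here
      local testdir=$1 requested_size=2214592512    # ~2.1 GiB, as in this run
      local storage_fallback target_dir target_space
      storage_fallback=$(mktemp -udt spdk.XXXXXX)   # e.g. /tmp/spdk.Y6szVN
      local candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
      for target_dir in "${candidates[@]}"; do
          mkdir -p "$target_dir"
          # Free bytes on the filesystem backing this candidate (-B1 keeps the
          # units in bytes; the traced helper reads the cached df -T table instead).
          target_space=$(df -B1 --output=avail "$target_dir" | tail -n1)
          if (( target_space >= requested_size )); then
              export SPDK_TEST_STORAGE=$target_dir
              printf '* Found test storage at %s\n' "$target_dir"
              return 0
          fi
      done
      return 1
  }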
00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/home 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=13784756224 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == tmpfs ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == ramfs ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ /home == / ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 
-- # xtrace_restore 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.504 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
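nvmf_veth_init, traced through nvmf/common.sh@141-@207 here, rebuilds the virtual test network from scratch: a target network namespace, one veth pair per endpoint, and a host-side bridge tying the peers together. Condensed into a standalone sketch (run as root); every command and address below is copied from the trace around this point:

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per endpoint: *_if is the endpoint, *_br its bridge-side peer.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target-side endpoints into the namespace, then address everything.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listener 1
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target listener 2

  # Bring all links up, then bridge the host-side peers together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Admit NVMe/TCP traffic (port 4420), allow bridged forwarding, and verify
  # reachability in both directions with single pings.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1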
00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:12.505 Cannot find device "nvmf_tgt_br" 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.505 Cannot find device "nvmf_tgt_br2" 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:12.505 Cannot find device "nvmf_tgt_br" 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:12.505 Cannot find device "nvmf_tgt_br2" 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@178 -- # 
ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:12.505 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:12.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:12:12.764 00:12:12.764 --- 10.0.0.2 ping statistics --- 00:12:12.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.764 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:12.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:12.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:12.764 00:12:12.764 --- 10.0.0.3 ping statistics --- 00:12:12.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.764 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:12.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:12.764 00:12:12.764 --- 10.0.0.1 ping statistics --- 00:12:12.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.764 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.764 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.765 ************************************ 00:12:12.765 START TEST nvmf_filesystem_no_in_capsule 00:12:12.765 ************************************ 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=72792 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 72792 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 72792 ']' 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.765 05:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.765 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.765 [2024-07-25 05:18:16.864312] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:12:12.765 [2024-07-25 05:18:16.864411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.023 [2024-07-25 05:18:17.000390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.023 [2024-07-25 05:18:17.089435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.023 [2024-07-25 05:18:17.089514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.023 [2024-07-25 05:18:17.089538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.024 [2024-07-25 05:18:17.089554] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.024 [2024-07-25 05:18:17.089566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
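The nvmfappstart/rpc_cmd sequence traced above and below, written out as plain scripts/rpc.py calls, a sketch assuming rpc_cmd is the test harness's thin wrapper around scripts/rpc.py on the default /var/tmp/spdk.sock socket; the NS shorthand is introduced here, and the -o transport flag is copied verbatim from the trace rather than explained:

  NS="ip netns exec nvmf_tgt_ns_spdk"

  # Start the target inside the test namespace: shm id 0 (-i), all tracepoint
  # groups (-e 0xFFFF), four cores (-m 0xF); the harness then waits for the
  # RPC socket to appear (waitforlisten, pid 72792 in this run).
  $NS build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # TCP transport with 8 KiB I/O units (-u) and in-capsule data disabled
  # (-c 0), which is exactly what this "no_in_capsule" variant exercises.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0

  # A 512 MiB RAM-backed bdev (1048576 blocks of 512 B) exported as the sole
  # namespace of subsystem cnode1, listening on 10.0.0.2:4420; -a allows any
  # host NQN to connect.
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420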
00:12:13.024 [2024-07-25 05:18:17.089676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.024 [2024-07-25 05:18:17.089794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.024 [2024-07-25 05:18:17.090236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.024 [2024-07-25 05:18:17.090260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.024 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.024 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:13.024 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.024 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.024 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.282 [2024-07-25 05:18:17.226541] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.282 Malloc1 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.282 05:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.282 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.283 [2024-07-25 05:18:17.359188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:13.283 { 00:12:13.283 "aliases": [ 00:12:13.283 "bb73702e-b7fa-41b0-b6bb-6c7f9a76b0a3" 00:12:13.283 ], 00:12:13.283 "assigned_rate_limits": { 00:12:13.283 "r_mbytes_per_sec": 0, 00:12:13.283 "rw_ios_per_sec": 0, 00:12:13.283 "rw_mbytes_per_sec": 0, 00:12:13.283 "w_mbytes_per_sec": 0 00:12:13.283 }, 00:12:13.283 "block_size": 512, 00:12:13.283 "claim_type": "exclusive_write", 00:12:13.283 "claimed": true, 00:12:13.283 "driver_specific": {}, 00:12:13.283 "memory_domains": [ 00:12:13.283 { 00:12:13.283 "dma_device_id": "system", 00:12:13.283 "dma_device_type": 1 00:12:13.283 }, 00:12:13.283 { 00:12:13.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.283 
"dma_device_type": 2 00:12:13.283 } 00:12:13.283 ], 00:12:13.283 "name": "Malloc1", 00:12:13.283 "num_blocks": 1048576, 00:12:13.283 "product_name": "Malloc disk", 00:12:13.283 "supported_io_types": { 00:12:13.283 "abort": true, 00:12:13.283 "compare": false, 00:12:13.283 "compare_and_write": false, 00:12:13.283 "copy": true, 00:12:13.283 "flush": true, 00:12:13.283 "get_zone_info": false, 00:12:13.283 "nvme_admin": false, 00:12:13.283 "nvme_io": false, 00:12:13.283 "nvme_io_md": false, 00:12:13.283 "nvme_iov_md": false, 00:12:13.283 "read": true, 00:12:13.283 "reset": true, 00:12:13.283 "seek_data": false, 00:12:13.283 "seek_hole": false, 00:12:13.283 "unmap": true, 00:12:13.283 "write": true, 00:12:13.283 "write_zeroes": true, 00:12:13.283 "zcopy": true, 00:12:13.283 "zone_append": false, 00:12:13.283 "zone_management": false 00:12:13.283 }, 00:12:13.283 "uuid": "bb73702e-b7fa-41b0-b6bb-6c7f9a76b0a3", 00:12:13.283 "zoned": false 00:12:13.283 } 00:12:13.283 ]' 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:13.283 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:13.542 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:16.076 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:16.660 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:16.660 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:16.660 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:16.660 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.660 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.918 ************************************ 00:12:16.918 START TEST filesystem_ext4 00:12:16.918 ************************************ 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
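Each filesystem_* subtest below drives the same make_filesystem helper from autotest_common.sh (@926-@945 in the traces): ext4 spells "force" as -F while btrfs and xfs take -f. A condensed sketch of that helper; only the flag selection and the final `return 0` appear in the trace, so the retry loop and its bound here are illustrative, not the exact upstream values:

  make_filesystem() {
      local fstype=$1 dev_name=$2
      local i=0 force
      # mkfs.ext4 forces with -F; mkfs.btrfs and mkfs.xfs use -f.
      if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
      # Retry: the freshly repartitioned device may still be settling
      # after partprobe when the first mkfs attempt lands.
      until "mkfs.$fstype" $force "$dev_name"; do
          (( i++ >= 15 )) && return 1   # illustrative bound, not the traced one
          sleep 1
      done
      return 0
  }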
00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:16.918 mke2fs 1.46.5 (30-Dec-2021) 00:12:16.918 Discarding device blocks: 0/522240 done 00:12:16.918 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:16.918 Filesystem UUID: 4bebe1ac-4a05-4937-a206-617b9de97465 00:12:16.918 Superblock backups stored on blocks: 00:12:16.918 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:16.918 00:12:16.918 Allocating group tables: 0/64 done 00:12:16.918 Writing inode tables: 0/64 done 00:12:16.918 Creating journal (8192 blocks): done 00:12:16.918 Writing superblocks and filesystem accounting information: 0/64 done 00:12:16.918 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.918 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.918 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:16.918 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.918 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:16.918 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:16.918 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.918 
05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 72792 00:12:16.918 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.918 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.176 00:12:17.176 real 0m0.264s 00:12:17.176 user 0m0.019s 00:12:17.176 sys 0m0.053s 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:17.176 ************************************ 00:12:17.176 END TEST filesystem_ext4 00:12:17.176 ************************************ 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.176 ************************************ 00:12:17.176 START TEST filesystem_btrfs 00:12:17.176 ************************************ 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:17.176 05:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:17.176 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:17.176 btrfs-progs v6.6.2 00:12:17.176 See https://btrfs.readthedocs.io for more information. 00:12:17.176 00:12:17.176 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:17.176 NOTE: several default settings have changed in version 5.15, please make sure 00:12:17.176 this does not affect your deployments: 00:12:17.176 - DUP for metadata (-m dup) 00:12:17.176 - enabled no-holes (-O no-holes) 00:12:17.176 - enabled free-space-tree (-R free-space-tree) 00:12:17.176 00:12:17.176 Label: (null) 00:12:17.176 UUID: 1ab38296-5332-4bf0-aea5-72263554e0f2 00:12:17.176 Node size: 16384 00:12:17.176 Sector size: 4096 00:12:17.176 Filesystem size: 510.00MiB 00:12:17.177 Block group profiles: 00:12:17.177 Data: single 8.00MiB 00:12:17.177 Metadata: DUP 32.00MiB 00:12:17.177 System: DUP 8.00MiB 00:12:17.177 SSD detected: yes 00:12:17.177 Zoned device: no 00:12:17.177 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:17.177 Runtime features: free-space-tree 00:12:17.177 Checksum: crc32c 00:12:17.177 Number of devices: 1 00:12:17.177 Devices: 00:12:17.177 ID SIZE PATH 00:12:17.177 1 510.00MiB /dev/nvme0n1p1 00:12:17.177 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:17.177 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 72792 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.435 00:12:17.435 real 0m0.228s 00:12:17.435 user 0m0.023s 00:12:17.435 sys 0m0.057s 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.435 ************************************ 00:12:17.435 END TEST filesystem_btrfs 00:12:17.435 ************************************ 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.435 ************************************ 00:12:17.435 START TEST filesystem_xfs 00:12:17.435 ************************************ 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:17.435 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:17.435 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:17.435 = sectsz=512 attr=2, projid32bit=1 00:12:17.435 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:17.435 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:12:17.435 data = bsize=4096 blocks=130560, imaxpct=25 00:12:17.435 = sunit=0 swidth=0 blks 00:12:17.435 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:17.435 log =internal log bsize=4096 blocks=16384, version=2 00:12:17.435 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:17.435 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:18.000 Discarding blocks...Done. 00:12:18.000 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:18.000 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 72792 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.527 00:12:20.527 real 0m3.102s 00:12:20.527 user 0m0.017s 00:12:20.527 sys 0m0.050s 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.527 ************************************ 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:20.527 END TEST filesystem_xfs 00:12:20.527 ************************************ 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
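The teardown traced here is the same three-step sequence every filesystem variant uses: drop the test partition under an exclusive lock on the disk, flush, then detach the initiator. A minimal sketch of what the trace boils down to (the function name and locals are illustrative; only the three commands themselves appear in the log):

    teardown_filesystem_target() {   # illustrative name, not from the suite
        local disk=/dev/nvme0n1
        local nqn=nqn.2016-06.io.spdk:cnode1
        flock "$disk" parted -s "$disk" rm 1   # remove partition 1 while holding the disk lock
        sync                                   # flush buffered writes before detaching
        nvme disconnect -n "$nqn"              # tear down the NVMe/TCP controller
    }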
00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.527 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 72792 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 72792 ']' 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 72792 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72792 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.845 killing process with pid 72792 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72792' 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 72792 00:12:20.845 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 72792 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:21.103 00:12:21.103 real 0m8.233s 00:12:21.103 user 0m30.782s 00:12:21.103 sys 0m1.519s 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.103 ************************************ 00:12:21.103 END TEST nvmf_filesystem_no_in_capsule 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.103 ************************************ 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:21.103 ************************************ 00:12:21.103 START TEST nvmf_filesystem_in_capsule 00:12:21.103 ************************************ 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=73090 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 73090 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 73090 ']' 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
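The in-capsule run that starts here differs from the no_in_capsule run above only in the transport options: nvmf_create_transport is passed -c 4096, so up to 4 KiB of write data can ride inside the command capsule instead of being fetched separately. A sketch of the equivalent standalone RPC sequence, taken verbatim from the rpc_cmd calls traced below (assuming rpc.py on its default /var/tmp/spdk.sock socket; the suite issues these through rpc_cmd inside the target netns):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096 = in-capsule data size
    $rpc bdev_malloc_create 512 512 -b Malloc1              # 512 MiB ram-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420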
00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.103 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.103 [2024-07-25 05:18:25.164611] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:12:21.103 [2024-07-25 05:18:25.164737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.362 [2024-07-25 05:18:25.308344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.362 [2024-07-25 05:18:25.369525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.362 [2024-07-25 05:18:25.369767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.362 [2024-07-25 05:18:25.369929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.362 [2024-07-25 05:18:25.370099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.362 [2024-07-25 05:18:25.370137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.362 [2024-07-25 05:18:25.370300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.362 [2024-07-25 05:18:25.370427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.362 [2024-07-25 05:18:25.370975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.362 [2024-07-25 05:18:25.370989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 [2024-07-25 05:18:26.222256] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.295 05:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 Malloc1 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 [2024-07-25 05:18:26.355734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:22.295 05:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:22.295 { 00:12:22.295 "aliases": [ 00:12:22.295 "ff6bf163-9afd-4a8b-913f-f2579dc1e8d5" 00:12:22.295 ], 00:12:22.295 "assigned_rate_limits": { 00:12:22.295 "r_mbytes_per_sec": 0, 00:12:22.295 "rw_ios_per_sec": 0, 00:12:22.295 "rw_mbytes_per_sec": 0, 00:12:22.295 "w_mbytes_per_sec": 0 00:12:22.295 }, 00:12:22.295 "block_size": 512, 00:12:22.295 "claim_type": "exclusive_write", 00:12:22.295 "claimed": true, 00:12:22.295 "driver_specific": {}, 00:12:22.295 "memory_domains": [ 00:12:22.295 { 00:12:22.295 "dma_device_id": "system", 00:12:22.295 "dma_device_type": 1 00:12:22.295 }, 00:12:22.295 { 00:12:22.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.295 "dma_device_type": 2 00:12:22.295 } 00:12:22.295 ], 00:12:22.295 "name": "Malloc1", 00:12:22.295 "num_blocks": 1048576, 00:12:22.295 "product_name": "Malloc disk", 00:12:22.295 "supported_io_types": { 00:12:22.295 "abort": true, 00:12:22.295 "compare": false, 00:12:22.295 "compare_and_write": false, 00:12:22.295 "copy": true, 00:12:22.295 "flush": true, 00:12:22.295 "get_zone_info": false, 00:12:22.295 "nvme_admin": false, 00:12:22.295 "nvme_io": false, 00:12:22.295 "nvme_io_md": false, 00:12:22.295 "nvme_iov_md": false, 00:12:22.295 "read": true, 00:12:22.295 "reset": true, 00:12:22.295 "seek_data": false, 00:12:22.295 "seek_hole": false, 00:12:22.295 "unmap": true, 00:12:22.295 "write": true, 00:12:22.295 "write_zeroes": true, 00:12:22.295 "zcopy": true, 00:12:22.295 "zone_append": false, 00:12:22.295 "zone_management": false 00:12:22.295 }, 00:12:22.295 "uuid": "ff6bf163-9afd-4a8b-913f-f2579dc1e8d5", 00:12:22.295 "zoned": false 00:12:22.295 } 00:12:22.295 ]' 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:22.295 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.553 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.553 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:22.553 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.553 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:22.553 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:25.081 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:25.081 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:25.081 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:25.082 05:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:25.082 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.649 ************************************ 00:12:25.649 START TEST filesystem_in_capsule_ext4 00:12:25.649 ************************************ 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:25.649 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:25.649 mke2fs 1.46.5 (30-Dec-2021) 00:12:25.945 Discarding device blocks: 0/522240 done 00:12:25.945 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:25.945 Filesystem UUID: c172c39e-37de-4bec-ba03-afead4a866ee 00:12:25.945 Superblock backups stored on blocks: 00:12:25.945 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:25.945 00:12:25.945 Allocating group tables: 0/64 done 00:12:25.945 Writing inode tables: 
0/64 done 00:12:25.945 Creating journal (8192 blocks): done 00:12:25.945 Writing superblocks and filesystem accounting information: 0/64 done 00:12:25.945 00:12:25.945 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:25.945 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.945 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.945 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:25.945 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.945 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 73090 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.208 00:12:26.208 real 0m0.340s 00:12:26.208 user 0m0.016s 00:12:26.208 sys 0m0.051s 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:26.208 ************************************ 00:12:26.208 END TEST filesystem_in_capsule_ext4 00:12:26.208 ************************************ 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.208 
************************************ 00:12:26.208 START TEST filesystem_in_capsule_btrfs 00:12:26.208 ************************************ 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:26.208 btrfs-progs v6.6.2 00:12:26.208 See https://btrfs.readthedocs.io for more information. 00:12:26.208 00:12:26.208 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:26.208 NOTE: several default settings have changed in version 5.15, please make sure 00:12:26.208 this does not affect your deployments: 00:12:26.208 - DUP for metadata (-m dup) 00:12:26.208 - enabled no-holes (-O no-holes) 00:12:26.208 - enabled free-space-tree (-R free-space-tree) 00:12:26.208 00:12:26.208 Label: (null) 00:12:26.208 UUID: a50a7d30-a441-46b3-ac20-8cd85ff05694 00:12:26.208 Node size: 16384 00:12:26.208 Sector size: 4096 00:12:26.208 Filesystem size: 510.00MiB 00:12:26.208 Block group profiles: 00:12:26.208 Data: single 8.00MiB 00:12:26.208 Metadata: DUP 32.00MiB 00:12:26.208 System: DUP 8.00MiB 00:12:26.208 SSD detected: yes 00:12:26.208 Zoned device: no 00:12:26.208 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:26.208 Runtime features: free-space-tree 00:12:26.208 Checksum: crc32c 00:12:26.208 Number of devices: 1 00:12:26.208 Devices: 00:12:26.208 ID SIZE PATH 00:12:26.208 1 510.00MiB /dev/nvme0n1p1 00:12:26.208 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 73090 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.208 00:12:26.208 real 0m0.220s 00:12:26.208 user 0m0.024s 00:12:26.208 sys 0m0.058s 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.208 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:12:26.208 ************************************ 00:12:26.208 END TEST filesystem_in_capsule_btrfs 00:12:26.208 ************************************ 00:12:26.466 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:26.466 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:26.466 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.466 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.466 ************************************ 00:12:26.466 START TEST filesystem_in_capsule_xfs 00:12:26.466 ************************************ 00:12:26.466 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:26.466 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:26.467 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:26.467 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:26.467 = sectsz=512 attr=2, projid32bit=1 00:12:26.467 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:26.467 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:26.467 data = bsize=4096 blocks=130560, imaxpct=25 00:12:26.467 = sunit=0 swidth=0 blks 00:12:26.467 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:26.467 log =internal log bsize=4096 blocks=16384, version=2 00:12:26.467 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:26.467 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:27.033 Discarding blocks...Done. 
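Each filesystem variant then runs the identical smoke test seen in the following trace: mount the new filesystem, perform a small write, unmount, and confirm that both the target process and the block devices survived. Reduced to its essentials (a sketch; 73090 is this run's nvmf_tgt pid, which the suite keeps in $nvmfpid):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                     # small write through the exported namespace
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 73090                             # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # controller device still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible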
00:12:27.033 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:27.033 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 73090 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.935 00:12:28.935 real 0m2.564s 00:12:28.935 user 0m0.024s 00:12:28.935 sys 0m0.039s 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.935 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:28.935 ************************************ 00:12:28.935 END TEST filesystem_in_capsule_xfs 00:12:28.935 ************************************ 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.935 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 73090 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 73090 ']' 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 73090 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73090 00:12:29.194 killing process with pid 73090 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73090' 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 73090 00:12:29.194 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 73090 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:29.452 00:12:29.452 real 0m8.328s 00:12:29.452 user 0m31.524s 00:12:29.452 sys 0m1.415s 00:12:29.452 05:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.452 ************************************ 00:12:29.452 END TEST nvmf_filesystem_in_capsule 00:12:29.452 ************************************ 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.452 rmmod nvme_tcp 00:12:29.452 rmmod nvme_fabrics 00:12:29.452 rmmod nvme_keyring 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:29.452 00:12:29.452 real 0m17.323s 00:12:29.452 user 1m2.525s 00:12:29.452 sys 0m3.300s 00:12:29.452 ************************************ 00:12:29.452 END TEST nvmf_filesystem 00:12:29.452 ************************************ 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.452 05:18:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.452 ************************************ 00:12:29.452 START TEST nvmf_target_discovery 00:12:29.452 ************************************ 00:12:29.452 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.710 * Looking for test storage... 00:12:29.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.710 05:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.710 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:29.711 Cannot find device "nvmf_tgt_br" 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.711 Cannot find device "nvmf_tgt_br2" 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:29.711 Cannot find device "nvmf_tgt_br" 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:29.711 Cannot find device "nvmf_tgt_br2" 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:29.711 05:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:29.711 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:29.968 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:29.968 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:29.968 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:29.968 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:29.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:29.968 00:12:29.968 --- 10.0.0.2 ping statistics --- 00:12:29.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.968 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:29.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:29.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:29.969 00:12:29.969 --- 10.0.0.3 ping statistics --- 00:12:29.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.969 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:29.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:29.969 00:12:29.969 --- 10.0.0.1 ping statistics --- 00:12:29.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.969 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=73528 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 73528 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 73528 ']' 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
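The nvmf_veth_init sequence traced above is the network scaffolding every nvmf-tcp test in this run relies on: a namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) enslaving the peer ends, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, an iptables rule admitting TCP 4420, and three pings as a smoke test. A minimal standalone sketch, condensed from the commands logged above (interface names and addresses exactly as in the log; run as root, error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host

The sub-millisecond RTTs in the ping output above confirm the bridge forwards in both directions before nvmf_tgt is started inside the namespace.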
00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.969 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.969 [2024-07-25 05:18:34.102858] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:12:29.969 [2024-07-25 05:18:34.102966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.226 [2024-07-25 05:18:34.240212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.226 [2024-07-25 05:18:34.332316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.226 [2024-07-25 05:18:34.332390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.226 [2024-07-25 05:18:34.332412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.226 [2024-07-25 05:18:34.332426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.226 [2024-07-25 05:18:34.332439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.226 [2024-07-25 05:18:34.332560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.226 [2024-07-25 05:18:34.332665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.226 [2024-07-25 05:18:34.333259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.226 [2024-07-25 05:18:34.333278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 [2024-07-25 05:18:35.134800] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 Null1 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 [2024-07-25 05:18:35.200201] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 Null2 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode2 Null2 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:31.162 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 Null3 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 Null4 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.163 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 4420 00:12:31.421 00:12:31.421 Discovery Log Number of Records 6, Generation counter 6 00:12:31.421 =====Discovery Log Entry 0====== 00:12:31.421 trtype: tcp 00:12:31.421 adrfam: ipv4 00:12:31.421 subtype: current discovery subsystem 00:12:31.421 treq: not required 00:12:31.421 portid: 0 
00:12:31.421 trsvcid: 4420 00:12:31.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:31.421 traddr: 10.0.0.2 00:12:31.421 eflags: explicit discovery connections, duplicate discovery information 00:12:31.421 sectype: none 00:12:31.421 =====Discovery Log Entry 1====== 00:12:31.421 trtype: tcp 00:12:31.421 adrfam: ipv4 00:12:31.421 subtype: nvme subsystem 00:12:31.421 treq: not required 00:12:31.421 portid: 0 00:12:31.421 trsvcid: 4420 00:12:31.421 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:31.421 traddr: 10.0.0.2 00:12:31.421 eflags: none 00:12:31.421 sectype: none 00:12:31.421 =====Discovery Log Entry 2====== 00:12:31.421 trtype: tcp 00:12:31.421 adrfam: ipv4 00:12:31.421 subtype: nvme subsystem 00:12:31.421 treq: not required 00:12:31.421 portid: 0 00:12:31.421 trsvcid: 4420 00:12:31.421 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:31.421 traddr: 10.0.0.2 00:12:31.421 eflags: none 00:12:31.421 sectype: none 00:12:31.421 =====Discovery Log Entry 3====== 00:12:31.421 trtype: tcp 00:12:31.421 adrfam: ipv4 00:12:31.421 subtype: nvme subsystem 00:12:31.421 treq: not required 00:12:31.421 portid: 0 00:12:31.421 trsvcid: 4420 00:12:31.421 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:31.421 traddr: 10.0.0.2 00:12:31.421 eflags: none 00:12:31.421 sectype: none 00:12:31.421 =====Discovery Log Entry 4====== 00:12:31.421 trtype: tcp 00:12:31.421 adrfam: ipv4 00:12:31.421 subtype: nvme subsystem 00:12:31.421 treq: not required 00:12:31.421 portid: 0 00:12:31.421 trsvcid: 4420 00:12:31.421 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:31.421 traddr: 10.0.0.2 00:12:31.421 eflags: none 00:12:31.421 sectype: none 00:12:31.421 =====Discovery Log Entry 5====== 00:12:31.421 trtype: tcp 00:12:31.421 adrfam: ipv4 00:12:31.421 subtype: discovery subsystem referral 00:12:31.421 treq: not required 00:12:31.421 portid: 0 00:12:31.421 trsvcid: 4430 00:12:31.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:31.421 traddr: 10.0.0.2 00:12:31.421 eflags: none 00:12:31.421 sectype: none 00:12:31.421 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:31.421 Perform nvmf subsystem discovery via RPC 00:12:31.421 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:31.421 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.421 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.421 [ 00:12:31.421 { 00:12:31.421 "allow_any_host": true, 00:12:31.421 "hosts": [], 00:12:31.421 "listen_addresses": [ 00:12:31.421 { 00:12:31.421 "adrfam": "IPv4", 00:12:31.421 "traddr": "10.0.0.2", 00:12:31.421 "trsvcid": "4420", 00:12:31.421 "trtype": "TCP" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:31.421 "subtype": "Discovery" 00:12:31.421 }, 00:12:31.421 { 00:12:31.421 "allow_any_host": true, 00:12:31.421 "hosts": [], 00:12:31.421 "listen_addresses": [ 00:12:31.421 { 00:12:31.421 "adrfam": "IPv4", 00:12:31.421 "traddr": "10.0.0.2", 00:12:31.421 "trsvcid": "4420", 00:12:31.421 "trtype": "TCP" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "max_cntlid": 65519, 00:12:31.421 "max_namespaces": 32, 00:12:31.421 "min_cntlid": 1, 00:12:31.421 "model_number": "SPDK bdev Controller", 00:12:31.421 "namespaces": [ 00:12:31.421 { 00:12:31.421 "bdev_name": "Null1", 00:12:31.421 "name": "Null1", 00:12:31.421 "nguid": 
"873FD65E50A74CC98810407261A5BC28", 00:12:31.421 "nsid": 1, 00:12:31.421 "uuid": "873fd65e-50a7-4cc9-8810-407261a5bc28" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.421 "serial_number": "SPDK00000000000001", 00:12:31.421 "subtype": "NVMe" 00:12:31.421 }, 00:12:31.421 { 00:12:31.421 "allow_any_host": true, 00:12:31.421 "hosts": [], 00:12:31.421 "listen_addresses": [ 00:12:31.421 { 00:12:31.421 "adrfam": "IPv4", 00:12:31.421 "traddr": "10.0.0.2", 00:12:31.421 "trsvcid": "4420", 00:12:31.421 "trtype": "TCP" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "max_cntlid": 65519, 00:12:31.421 "max_namespaces": 32, 00:12:31.421 "min_cntlid": 1, 00:12:31.421 "model_number": "SPDK bdev Controller", 00:12:31.421 "namespaces": [ 00:12:31.421 { 00:12:31.421 "bdev_name": "Null2", 00:12:31.421 "name": "Null2", 00:12:31.421 "nguid": "D3F253E75ADA451AB19C6B2AFC6710E8", 00:12:31.421 "nsid": 1, 00:12:31.421 "uuid": "d3f253e7-5ada-451a-b19c-6b2afc6710e8" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:31.421 "serial_number": "SPDK00000000000002", 00:12:31.421 "subtype": "NVMe" 00:12:31.421 }, 00:12:31.421 { 00:12:31.421 "allow_any_host": true, 00:12:31.421 "hosts": [], 00:12:31.421 "listen_addresses": [ 00:12:31.421 { 00:12:31.421 "adrfam": "IPv4", 00:12:31.421 "traddr": "10.0.0.2", 00:12:31.421 "trsvcid": "4420", 00:12:31.421 "trtype": "TCP" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "max_cntlid": 65519, 00:12:31.421 "max_namespaces": 32, 00:12:31.421 "min_cntlid": 1, 00:12:31.421 "model_number": "SPDK bdev Controller", 00:12:31.421 "namespaces": [ 00:12:31.421 { 00:12:31.421 "bdev_name": "Null3", 00:12:31.421 "name": "Null3", 00:12:31.421 "nguid": "A8BE10E30B504BCEA670FC925E788DD6", 00:12:31.421 "nsid": 1, 00:12:31.421 "uuid": "a8be10e3-0b50-4bce-a670-fc925e788dd6" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:31.421 "serial_number": "SPDK00000000000003", 00:12:31.421 "subtype": "NVMe" 00:12:31.421 }, 00:12:31.421 { 00:12:31.421 "allow_any_host": true, 00:12:31.421 "hosts": [], 00:12:31.421 "listen_addresses": [ 00:12:31.421 { 00:12:31.421 "adrfam": "IPv4", 00:12:31.421 "traddr": "10.0.0.2", 00:12:31.421 "trsvcid": "4420", 00:12:31.421 "trtype": "TCP" 00:12:31.421 } 00:12:31.421 ], 00:12:31.421 "max_cntlid": 65519, 00:12:31.421 "max_namespaces": 32, 00:12:31.421 "min_cntlid": 1, 00:12:31.421 "model_number": "SPDK bdev Controller", 00:12:31.421 "namespaces": [ 00:12:31.421 { 00:12:31.421 "bdev_name": "Null4", 00:12:31.421 "name": "Null4", 00:12:31.421 "nguid": "18B179F3FCB44BFA99E4337AC11A31D4", 00:12:31.421 "nsid": 1, 00:12:31.422 "uuid": "18b179f3-fcb4-4bfa-99e4-337ac11a31d4" 00:12:31.422 } 00:12:31.422 ], 00:12:31.422 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:31.422 "serial_number": "SPDK00000000000004", 00:12:31.422 "subtype": "NVMe" 00:12:31.422 } 00:12:31.422 ] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 
05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.422 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.422 rmmod nvme_tcp 00:12:31.422 rmmod nvme_fabrics 00:12:31.680 rmmod nvme_keyring 00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 73528 ']' 00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 73528 
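Stripped of the xtrace noise, the discovery test traced above reduces to a short RPC sequence: create the TCP transport, then for each of four null bdevs create a subsystem, attach the bdev as a namespace, and listen on 10.0.0.2:4420; expose the discovery service plus a port-4430 referral; verify six discovery log records (the current discovery subsystem, four NVMe subsystems, one referral) and the nvmf_get_subsystems JSON; then tear everything back down. A sketch of the same flow driven through scripts/rpc.py (rpc_cmd in the harness is a thin wrapper around it; paths assume an SPDK checkout and a running nvmf_tgt, as above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512          # size/block-size values as in the log
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420     # expect 6 discovery log records
    for i in 1 2 3 4; do                         # teardown, mirroring the trace above
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        scripts/rpc.py bdev_null_delete Null$i
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

In this run nvmf_tgt itself is launched via ip netns exec nvmf_tgt_ns_spdk (pid 73528) while rpc.py reaches it over the UNIX socket /var/tmp/spdk.sock, and the final bdev_get_bdevs check (empty check_bdevs above) confirms no bdevs were left behind.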
00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 73528 ']' 00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 73528 00:12:31.680 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73528 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73528' 00:12:31.681 killing process with pid 73528 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 73528 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 73528 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.681 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:31.939 00:12:31.939 real 0m2.252s 00:12:31.939 user 0m6.300s 00:12:31.939 sys 0m0.549s 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.939 ************************************ 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.939 END TEST nvmf_target_discovery 00:12:31.939 ************************************ 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.939 ************************************ 00:12:31.939 START TEST nvmf_referrals 00:12:31.939 
************************************ 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:31.939 * Looking for test storage... 00:12:31.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.939 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:31.939 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.939 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.940 05:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.940 05:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:31.940 Cannot find device "nvmf_tgt_br" 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.940 Cannot find device "nvmf_tgt_br2" 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:31.940 Cannot find device "nvmf_tgt_br" 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:31.940 Cannot find device "nvmf_tgt_br2" 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:12:31.940 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:32.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:32.198 00:12:32.198 --- 10.0.0.2 ping statistics --- 00:12:32.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.198 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:32.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:12:32.198 00:12:32.198 --- 10.0.0.3 ping statistics --- 00:12:32.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.198 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
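The nvmf_veth_init sequence traced above boils down to a small reproducible topology: one network namespace for the target, three veth pairs, and a bridge tying the host-side peers together. The earlier "Cannot find device" / "Cannot open network namespace" messages are just the teardown half of this helper running against a clean host, which is why each failing cleanup command is followed by `true`. A condensed standalone sketch, assuming root and iproute2, with the names and addresses taken from the trace:

  # Build the test topology nvmf_veth_init assembles above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # host-side peers joined on one bridge
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                          # connectivity check, as in the trace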
00:12:32.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:32.198 00:12:32.198 --- 10.0.0.1 ping statistics --- 00:12:32.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.198 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.198 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.199 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.199 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.199 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=73756 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 73756 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 73756 ']' 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.457 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.457 [2024-07-25 05:18:36.467543] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
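The NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") step at nvmf/common.sh@209 above is the bash array-prefix idiom that makes every later target invocation run inside the namespace without changing any caller. A minimal sketch of the same pattern, using the binary path and flags from this run:

  # Prefix a command array with "ip netns exec <ns>" so callers need no changes.
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0xF &   # expands to: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  nvmfpid=$!                  # the pid waitforlisten polls for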
00:12:32.457 [2024-07-25 05:18:36.467671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.715 [2024-07-25 05:18:36.632772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.715 [2024-07-25 05:18:36.707843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.715 [2024-07-25 05:18:36.707895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.715 [2024-07-25 05:18:36.707907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.715 [2024-07-25 05:18:36.707916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.715 [2024-07-25 05:18:36.707923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.715 [2024-07-25 05:18:36.708074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.715 [2024-07-25 05:18:36.708291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.715 [2024-07-25 05:18:36.708749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.715 [2024-07-25 05:18:36.708763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 [2024-07-25 05:18:37.498460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 [2024-07-25 05:18:37.522488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
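With the target process up, the test drives it over the JSON-RPC socket; rpc_cmd is the suite's wrapper around scripts/rpc.py. The equivalent direct calls for the two steps above would look roughly like this (a sketch — the socket path and wrapper behavior are assumptions, the flags are taken verbatim from the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_cmd wraps this, talking to /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # "discovery" = the well-known discovery NQN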
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
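The three nvmf_discovery_add_referral calls and the jq checks above reduce to a short verification recipe: add the referrals, confirm the count, then compare the sorted traddr list. As direct rpc.py calls (same socket assumed as before):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq length                           # expect 3
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort   # expect the three IPs, sorted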
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:33.653 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:33.921 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
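The nvme branch of get_referral_ips performs the same check from the initiator's side: it asks the discovery service on 10.0.0.2:8009 for its log page as JSON and strips out the entry describing the discovery subsystem itself. A sketch, where the hostnqn/hostid values are whatever nvme gen-hostnqn produced for this run:

  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort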
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.921 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.179 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.437 
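get_discovery_entries, called at referrals.sh@67/@68/@75/@76 in this trace, is the same discovery dump filtered by entry subtype, which lets the test assert which NQN a referral advertises. Roughly, as its call sites suggest (a sketch; the real helper lives in target/referrals.sh):

  get_discovery_entries() {
      local subtype=$1
      nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
           -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
  }
  get_discovery_entries 'nvme subsystem' | jq -r .subnqn                 # expect nqn.2016-06.io.spdk:cnode1
  get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn   # expect the well-known discovery NQN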
05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.437 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
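nvmftestfini then tears everything down: sync, unload the kernel initiator modules (the rmmod lines that follow are modprobe -v's verbose output, retried in a set +e loop), kill the target by pid, and remove the namespace. A condensed sketch using the pid and names from this run; _remove_spdk_ns internals are not shown in the trace, so the ip netns delete here is an assumption:

  sync
  modprobe -v -r nvme-tcp              # unloads nvme_tcp; nvme_fabrics/nvme_keyring go with it
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 73756 in the trace
  ip netns delete nvmf_tgt_ns_spdk     # assumed body of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if        # as at nvmf/common.sh@279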
00:12:34.695 rmmod nvme_tcp 00:12:34.695 rmmod nvme_fabrics 00:12:34.695 rmmod nvme_keyring 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 73756 ']' 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 73756 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 73756 ']' 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 73756 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73756 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.695 killing process with pid 73756 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73756' 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 73756 00:12:34.695 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 73756 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:34.955 00:12:34.955 real 0m3.040s 00:12:34.955 user 0m10.053s 00:12:34.955 sys 0m0.740s 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.955 ************************************ 00:12:34.955 END TEST nvmf_referrals 00:12:34.955 ************************************ 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.955 05:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.955 ************************************ 00:12:34.955 START TEST nvmf_connect_disconnect 00:12:34.955 ************************************ 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:34.955 * Looking for test storage... 00:12:34.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.955 05:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.955 05:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.955 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:34.956 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:35.214 Cannot find device "nvmf_tgt_br" 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.214 Cannot find device "nvmf_tgt_br2" 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:35.214 Cannot find device "nvmf_tgt_br" 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:35.214 Cannot find device "nvmf_tgt_br2" 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:35.214 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.215 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:35.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:35.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:35.473 00:12:35.473 --- 10.0.0.2 ping statistics --- 00:12:35.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.473 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:35.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:12:35.473 00:12:35.473 --- 10.0.0.3 ping statistics --- 00:12:35.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.473 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:35.473 00:12:35.473 --- 10.0.0.1 ping statistics --- 00:12:35.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.473 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=74061 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 74061 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 74061 ']' 00:12:35.473 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.474 05:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.474 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.474 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.474 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.474 [2024-07-25 05:18:39.514933] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:12:35.474 [2024-07-25 05:18:39.515028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.474 [2024-07-25 05:18:39.647753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.731 [2024-07-25 05:18:39.732580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.731 [2024-07-25 05:18:39.732658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.731 [2024-07-25 05:18:39.732677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.731 [2024-07-25 05:18:39.732691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.731 [2024-07-25 05:18:39.732703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
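The startup banner above doubles as a reminder of how to pull tracepoints out of a running target: the trace data sits in shared memory keyed by the app's -i instance id. For example, while nvmf_tgt -i 0 is alive (assumes the spdk_trace tool is built and on PATH):

  spdk_trace -s nvmf -i 0         # snapshot of events at runtime, as the banner suggests
  cp /dev/shm/nvmf_trace.0 /tmp/  # or keep the raw file for offline analysis/debug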
00:12:35.731 [2024-07-25 05:18:39.732939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.731 [2024-07-25 05:18:39.733468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.731 [2024-07-25 05:18:39.733699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.731 [2024-07-25 05:18:39.733555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.681 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.682 [2024-07-25 05:18:40.554472] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.682 05:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.682 [2024-07-25 05:18:40.618896] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:36.682 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:39.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.079 rmmod nvme_tcp 00:12:48.079 rmmod nvme_fabrics 00:12:48.079 rmmod nvme_keyring 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 74061 ']' 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 74061 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 74061 ']' 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 74061 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
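The subsystem is now fully provisioned, and each of the five iterations above attaches and detaches the kernel initiator; every "NQN:... disconnected 1 controller(s)" line is nvme-cli's disconnect output. The same sequence written as direct rpc.py / nvme-cli calls (all arguments copied from the trace; the real loop body lives in test/nvmf/target/connect_disconnect.sh and additionally waits for the namespace device to appear before disconnecting):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                  # TCP transport
    $rpc bdev_malloc_create 64 512                                     # returns Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for _ in $(seq 1 5); do                                            # num_iterations=5 above
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1                  # prints the NQN:... line
    done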
00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.079 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74061 00:12:48.079 killing process with pid 74061 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74061' 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 74061 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 74061 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:48.079 00:12:48.079 real 0m13.218s 00:12:48.079 user 0m48.576s 00:12:48.079 sys 0m1.906s 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.079 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.079 ************************************ 00:12:48.079 END TEST nvmf_connect_disconnect 00:12:48.079 ************************************ 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.376 ************************************ 00:12:48.376 START TEST nvmf_multitarget 00:12:48.376 ************************************ 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:48.376 * Looking for test storage... 
00:12:48.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:48.376 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.377 05:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp 
']' 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:48.377 Cannot find device "nvmf_tgt_br" 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:12:48.377 Cannot find device "nvmf_tgt_br2" 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:48.377 Cannot find device "nvmf_tgt_br" 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:48.377 Cannot find device "nvmf_tgt_br2" 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.377 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
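Condensing the nvmf_veth_init trace above (plus the bridge and iptables steps just below), the network each test suite rebuilds looks like this; the "Cannot find device" warnings are only the best-effort teardown of the previous run's interfaces:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target,    10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target,    10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # bridges the *_br peers
    ip link set nvmf_init_br master nvmf_br                      # same for nvmf_tgt_br{,2}
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

With every interface up, the three pings that follow verify initiator-to-target reachability (10.0.0.2, 10.0.0.3) and target-to-initiator reachability (10.0.0.1) before the target application starts.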
00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:48.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:12:48.637 00:12:48.637 --- 10.0.0.2 ping statistics --- 00:12:48.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.637 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:48.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:48.637 00:12:48.637 --- 10.0.0.3 ping statistics --- 00:12:48.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.637 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:48.637 00:12:48.637 --- 10.0.0.1 ping statistics --- 00:12:48.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.637 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=74459 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 74459 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 74459 ']' 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.637 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.637 [2024-07-25 05:18:52.769775] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:12:48.637 [2024-07-25 05:18:52.769874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.895 [2024-07-25 05:18:52.934702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.895 [2024-07-25 05:18:53.020487] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.895 [2024-07-25 05:18:53.020564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.895 [2024-07-25 05:18:53.020581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.895 [2024-07-25 05:18:53.020604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.895 [2024-07-25 05:18:53.020615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.895 [2024-07-25 05:18:53.020873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.895 [2024-07-25 05:18:53.020972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.895 [2024-07-25 05:18:53.021320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.895 [2024-07-25 05:18:53.021329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:49.830 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:50.089 "nvmf_tgt_1" 00:12:50.089 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:50.089 "nvmf_tgt_2" 00:12:50.089 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:50.089 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:12:50.346 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:50.347 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:50.347 true 00:12:50.347 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:50.605 true 00:12:50.605 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:50.605 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:50.605 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:50.605 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:50.605 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:50.605 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.605 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.864 rmmod nvme_tcp 00:12:50.864 rmmod nvme_fabrics 00:12:50.864 rmmod nvme_keyring 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 74459 ']' 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 74459 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 74459 ']' 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 74459 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74459 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.864 killing process with pid 74459 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
74459' 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 74459 00:12:50.864 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 74459 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.864 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:51.122 ************************************ 00:12:51.122 END TEST nvmf_multitarget 00:12:51.122 ************************************ 00:12:51.122 00:12:51.122 real 0m2.782s 00:12:51.122 user 0m9.284s 00:12:51.122 sys 0m0.615s 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.122 ************************************ 00:12:51.122 START TEST nvmf_rpc 00:12:51.122 ************************************ 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:51.122 * Looking for test storage... 
00:12:51.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:51.122 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.123 05:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:51.123 Cannot find device "nvmf_tgt_br" 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:51.123 Cannot find device "nvmf_tgt_br2" 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:51.123 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:51.123 Cannot find device "nvmf_tgt_br" 00:12:51.124 05:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:12:51.124 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:51.124 Cannot find device "nvmf_tgt_br2" 00:12:51.124 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:12:51.124 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:51.124 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:51.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:51.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:51.421 
05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:51.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:12:51.421 00:12:51.421 --- 10.0.0.2 ping statistics --- 00:12:51.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.421 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:51.421 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:51.421 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:12:51.421 00:12:51.421 --- 10.0.0.3 ping statistics --- 00:12:51.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.421 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:51.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:51.421 00:12:51.421 --- 10.0.0.1 ping statistics --- 00:12:51.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.421 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.421 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.422 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.422 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.422 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.422 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.422 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=74694 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@482 -- # waitforlisten 74694 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 74694 ']' 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.702 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.702 [2024-07-25 05:18:55.660739] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:12:51.702 [2024-07-25 05:18:55.660865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.702 [2024-07-25 05:18:55.800354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.702 [2024-07-25 05:18:55.863177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.702 [2024-07-25 05:18:55.863225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.702 [2024-07-25 05:18:55.863236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.702 [2024-07-25 05:18:55.863245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.702 [2024-07-25 05:18:55.863256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
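The nvmf/common.sh trace above first tears down any leftover interfaces (the "Cannot find device" and "Cannot open network namespace" messages are the expected no-ops of that cleanup), then builds the test topology from scratch: one network namespace for the target, three veth pairs, and a bridge joining the host-side ends. Condensed into plain commands (names and addresses exactly as traced; a sketch of what the script does, not the script itself):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # namespace -> host

With the plumbing verified, nvmfappstart launches the target inside the namespace, which is why NVMF_APP has the NVMF_TARGET_NS_CMD prefix prepended before the pid is captured:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # block until the RPC socket /var/tmp/spdk.sock is up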
00:12:51.702 [2024-07-25 05:18:55.863402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.702 [2024-07-25 05:18:55.863813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.702 [2024-07-25 05:18:55.863933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.702 [2024-07-25 05:18:55.864024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:52.637 "poll_groups": [ 00:12:52.637 { 00:12:52.637 "admin_qpairs": 0, 00:12:52.637 "completed_nvme_io": 0, 00:12:52.637 "current_admin_qpairs": 0, 00:12:52.637 "current_io_qpairs": 0, 00:12:52.637 "io_qpairs": 0, 00:12:52.637 "name": "nvmf_tgt_poll_group_000", 00:12:52.637 "pending_bdev_io": 0, 00:12:52.637 "transports": [] 00:12:52.637 }, 00:12:52.637 { 00:12:52.637 "admin_qpairs": 0, 00:12:52.637 "completed_nvme_io": 0, 00:12:52.637 "current_admin_qpairs": 0, 00:12:52.637 "current_io_qpairs": 0, 00:12:52.637 "io_qpairs": 0, 00:12:52.637 "name": "nvmf_tgt_poll_group_001", 00:12:52.637 "pending_bdev_io": 0, 00:12:52.637 "transports": [] 00:12:52.637 }, 00:12:52.637 { 00:12:52.637 "admin_qpairs": 0, 00:12:52.637 "completed_nvme_io": 0, 00:12:52.637 "current_admin_qpairs": 0, 00:12:52.637 "current_io_qpairs": 0, 00:12:52.637 "io_qpairs": 0, 00:12:52.637 "name": "nvmf_tgt_poll_group_002", 00:12:52.637 "pending_bdev_io": 0, 00:12:52.637 "transports": [] 00:12:52.637 }, 00:12:52.637 { 00:12:52.637 "admin_qpairs": 0, 00:12:52.637 "completed_nvme_io": 0, 00:12:52.637 "current_admin_qpairs": 0, 00:12:52.637 "current_io_qpairs": 0, 00:12:52.637 "io_qpairs": 0, 00:12:52.637 "name": "nvmf_tgt_poll_group_003", 00:12:52.637 "pending_bdev_io": 0, 00:12:52.637 "transports": [] 00:12:52.637 } 00:12:52.637 ], 00:12:52.637 "tick_rate": 2200000000 00:12:52.637 }' 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
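The nvmf_get_stats JSON above reports one poll group per reactor core (the target was started with -m 0xF, so four cores and four groups) and, at this point, empty transport lists. The rpc.sh assertions are built from two small jq helpers whose pipelines are visible in the trace; reconstructed as a sketch from those traced pipelines:

    # jcount: how many values a jq filter yields from the captured stats
    jcount() {
        local filter=$1
        echo "$stats" | jq "$filter" | wc -l
    }

    # jsum: arithmetic sum of the values a jq filter yields
    jsum() {
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    stats=$(rpc_cmd nvmf_get_stats)
    (( $(jcount '.poll_groups[].name') == 4 ))   # one poll group per core

The same helpers reappear just below: after checking that .poll_groups[0].transports[0] is still null, the test registers the TCP transport with nvmf_create_transport -t tcp -o -u 8192, re-fetches the stats (each group now carries a "trtype": "TCP" entry), and asserts with jsum that admin_qpairs and io_qpairs still sum to zero before anything connects.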
00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.637 [2024-07-25 05:18:56.775949] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.637 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.896 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.896 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:52.896 "poll_groups": [ 00:12:52.896 { 00:12:52.896 "admin_qpairs": 0, 00:12:52.896 "completed_nvme_io": 0, 00:12:52.896 "current_admin_qpairs": 0, 00:12:52.896 "current_io_qpairs": 0, 00:12:52.896 "io_qpairs": 0, 00:12:52.896 "name": "nvmf_tgt_poll_group_000", 00:12:52.896 "pending_bdev_io": 0, 00:12:52.896 "transports": [ 00:12:52.896 { 00:12:52.896 "trtype": "TCP" 00:12:52.896 } 00:12:52.896 ] 00:12:52.896 }, 00:12:52.896 { 00:12:52.896 "admin_qpairs": 0, 00:12:52.896 "completed_nvme_io": 0, 00:12:52.896 "current_admin_qpairs": 0, 00:12:52.896 "current_io_qpairs": 0, 00:12:52.896 "io_qpairs": 0, 00:12:52.896 "name": "nvmf_tgt_poll_group_001", 00:12:52.896 "pending_bdev_io": 0, 00:12:52.896 "transports": [ 00:12:52.896 { 00:12:52.896 "trtype": "TCP" 00:12:52.896 } 00:12:52.896 ] 00:12:52.896 }, 00:12:52.896 { 00:12:52.896 "admin_qpairs": 0, 00:12:52.896 "completed_nvme_io": 0, 00:12:52.896 "current_admin_qpairs": 0, 00:12:52.896 "current_io_qpairs": 0, 00:12:52.896 "io_qpairs": 0, 00:12:52.896 "name": "nvmf_tgt_poll_group_002", 00:12:52.896 "pending_bdev_io": 0, 00:12:52.896 "transports": [ 00:12:52.896 { 00:12:52.896 "trtype": "TCP" 00:12:52.896 } 00:12:52.896 ] 00:12:52.896 }, 00:12:52.896 { 00:12:52.896 "admin_qpairs": 0, 00:12:52.896 "completed_nvme_io": 0, 00:12:52.896 "current_admin_qpairs": 0, 00:12:52.896 "current_io_qpairs": 0, 00:12:52.897 "io_qpairs": 0, 00:12:52.897 "name": "nvmf_tgt_poll_group_003", 00:12:52.897 "pending_bdev_io": 0, 00:12:52.897 "transports": [ 00:12:52.897 { 00:12:52.897 "trtype": "TCP" 00:12:52.897 } 00:12:52.897 ] 00:12:52.897 } 00:12:52.897 ], 00:12:52.897 "tick_rate": 2200000000 00:12:52.897 }' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:52.897 05:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.897 Malloc1 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.897 [2024-07-25 05:18:56.984671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -a 10.0.0.2 -s 4420 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -a 10.0.0.2 -s 4420 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:52.897 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -a 10.0.0.2 -s 4420 00:12:52.897 [2024-07-25 05:18:57.006908] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78' 00:12:52.897 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.897 could not add new controller: failed to write to nvme-fabrics device 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.897 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 
--hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.155 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.155 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.155 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.155 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.155 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.056 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.056 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.056 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.056 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.056 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.056 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.056 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.314 [2024-07-25 05:18:59.288114] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78' 00:12:55.314 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.314 could not add new controller: failed to write to nvme-fabrics device 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.314 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
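Taken together, the sequence from rpc.sh@52 onward is a full authorization round-trip: a connect from a host not on the subsystem's allowed list is rejected by the target with "does not allow host" (the NOT wrapper asserts that the nvme-cli call fails), nvmf_subsystem_add_host whitelists the host NQN so the same connect succeeds, nvmf_subsystem_remove_host makes it fail again, and nvmf_subsystem_allow_any_host -e finally opens the subsystem to everyone. Expressed as direct calls (a sketch: rpc_cmd in the trace is a wrapper around SPDK's scripts/rpc.py; NQNs and addresses as traced):

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78

    rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"        # enforce the host list
    nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420   # rejected

    rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"      # whitelist the host NQN
    nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420   # accepted
    nvme disconnect -n "$SUBNQN"

    rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # back to rejected
    rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"        # open to any host
    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420     # accepted again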
00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.845 [2024-07-25 05:19:01.551686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.845 
05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.845 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.846 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.846 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.846 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:57.846 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.846 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:57.846 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
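The first pass of the rpc.sh@81 loop is now complete, and the same create/connect/verify/teardown cycle repeats four more times below. Condensed (helper names are the traced ones from autotest_common.sh: waitforserial polls lsblk -l -o NAME,SERIAL every two seconds until a device with the given serial appears, waitforserial_disconnect until it is gone):

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # Malloc1 as nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

A second loop (rpc.sh@99, further below) runs the same subsystem, listener, and namespace lifecycle five more times without ever connecting an initiator, exercising nvmf_subsystem_remove_ns and nvmf_delete_subsystem against an idle subsystem.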
00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.767 [2024-07-25 05:19:03.862763] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.767 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.036 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.036 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:00.036 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.036 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.036 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.934 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.934 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.934 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.934 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.934 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.934 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.934 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.193 [2024-07-25 05:19:06.162018] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:02.193 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.726 05:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.726 [2024-07-25 05:19:08.453805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.726 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.627 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.627 [2024-07-25 05:19:10.748966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.628 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.886 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.886 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.886 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.886 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.886 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.817 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.817 05:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.817 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.817 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.817 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.817 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:08.817 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.075 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.075 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.075 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 [2024-07-25 05:19:13.056155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 [2024-07-25 05:19:13.104165] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 [2024-07-25 05:19:13.152248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 [2024-07-25 05:19:13.200327] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.076 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.077 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.077 [2024-07-25 05:19:13.248355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.335 05:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:09.335 "poll_groups": [ 00:13:09.335 { 00:13:09.335 "admin_qpairs": 2, 00:13:09.335 "completed_nvme_io": 66, 00:13:09.335 "current_admin_qpairs": 0, 00:13:09.335 "current_io_qpairs": 0, 00:13:09.335 "io_qpairs": 16, 00:13:09.335 "name": "nvmf_tgt_poll_group_000", 00:13:09.335 "pending_bdev_io": 0, 00:13:09.335 "transports": [ 00:13:09.335 { 00:13:09.335 "trtype": "TCP" 00:13:09.335 } 00:13:09.335 ] 00:13:09.335 }, 00:13:09.335 { 00:13:09.335 "admin_qpairs": 3, 00:13:09.335 "completed_nvme_io": 116, 00:13:09.335 "current_admin_qpairs": 0, 00:13:09.335 "current_io_qpairs": 0, 00:13:09.335 "io_qpairs": 17, 00:13:09.335 "name": "nvmf_tgt_poll_group_001", 00:13:09.335 "pending_bdev_io": 0, 00:13:09.335 "transports": [ 00:13:09.335 { 00:13:09.335 "trtype": "TCP" 00:13:09.335 } 00:13:09.335 ] 00:13:09.335 }, 00:13:09.335 { 00:13:09.335 "admin_qpairs": 1, 00:13:09.335 "completed_nvme_io": 170, 00:13:09.335 "current_admin_qpairs": 0, 00:13:09.335 "current_io_qpairs": 0, 00:13:09.335 "io_qpairs": 19, 00:13:09.335 "name": "nvmf_tgt_poll_group_002", 00:13:09.335 "pending_bdev_io": 0, 00:13:09.335 "transports": [ 00:13:09.335 { 00:13:09.335 "trtype": "TCP" 00:13:09.335 } 00:13:09.335 ] 00:13:09.335 }, 00:13:09.335 { 00:13:09.335 "admin_qpairs": 1, 00:13:09.335 "completed_nvme_io": 68, 00:13:09.335 "current_admin_qpairs": 0, 00:13:09.335 "current_io_qpairs": 0, 00:13:09.335 "io_qpairs": 18, 00:13:09.335 "name": "nvmf_tgt_poll_group_003", 00:13:09.335 "pending_bdev_io": 0, 00:13:09.335 "transports": [ 00:13:09.335 { 00:13:09.335 "trtype": "TCP" 00:13:09.335 } 00:13:09.335 ] 00:13:09.335 } 00:13:09.335 ], 00:13:09.335 "tick_rate": 2200000000 00:13:09.335 }' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:09.335 rmmod nvme_tcp 00:13:09.335 rmmod nvme_fabrics 00:13:09.335 rmmod nvme_keyring 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 74694 ']' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 74694 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 74694 ']' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 74694 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74694 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.335 killing process with pid 74694 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74694' 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 74694 00:13:09.335 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 74694 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:09.593 00:13:09.593 real 0m18.612s 00:13:09.593 user 1m9.669s 00:13:09.593 sys 0m2.705s 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.593 ************************************ 00:13:09.593 END TEST nvmf_rpc 00:13:09.593 ************************************ 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.593 ************************************ 00:13:09.593 START TEST nvmf_invalid 00:13:09.593 ************************************ 00:13:09.593 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:09.853 * Looking for test storage... 00:13:09.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:09.853 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt 
== phy ]] 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:09.854 Cannot find device "nvmf_tgt_br" 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.854 Cannot find device "nvmf_tgt_br2" 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:09.854 Cannot find device "nvmf_tgt_br" 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:09.854 Cannot find device "nvmf_tgt_br2" 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:09.854 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.854 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:09.854 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:10.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:10.112 00:13:10.112 --- 10.0.0.2 ping statistics --- 00:13:10.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.112 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:10.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:10.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:13:10.112 00:13:10.112 --- 10.0.0.3 ping statistics --- 00:13:10.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.112 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:10.112 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:10.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:10.112 00:13:10.113 --- 10.0.0.1 ping statistics --- 00:13:10.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.113 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=75198 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 75198 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 75198 ']' 00:13:10.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.113 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:10.371 [2024-07-25 05:19:14.309079] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:13:10.371 [2024-07-25 05:19:14.309660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.371 [2024-07-25 05:19:14.446647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.371 [2024-07-25 05:19:14.516172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.371 [2024-07-25 05:19:14.516475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.371 [2024-07-25 05:19:14.516677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.371 [2024-07-25 05:19:14.516853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.371 [2024-07-25 05:19:14.517053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.371 [2024-07-25 05:19:14.517302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.371 [2024-07-25 05:19:14.517458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.371 [2024-07-25 05:19:14.517520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.371 [2024-07-25 05:19:14.517523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.696 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9186 00:13:10.954 [2024-07-25 05:19:14.932187] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:10.954 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/25 05:19:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9186 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:10.954 request: 00:13:10.954 { 00:13:10.954 "method": "nvmf_create_subsystem", 00:13:10.954 "params": { 00:13:10.954 "nqn": "nqn.2016-06.io.spdk:cnode9186", 00:13:10.954 "tgt_name": "foobar" 00:13:10.954 } 00:13:10.954 } 00:13:10.954 Got JSON-RPC error response 00:13:10.954 GoRPCClient: error on JSON-RPC call' 00:13:10.955 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/25 05:19:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9186 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:10.955 request: 00:13:10.955 { 00:13:10.955 "method": "nvmf_create_subsystem", 00:13:10.955 "params": { 00:13:10.955 "nqn": "nqn.2016-06.io.spdk:cnode9186", 00:13:10.955 "tgt_name": "foobar" 00:13:10.955 } 00:13:10.955 } 00:13:10.955 Got JSON-RPC error response 00:13:10.955 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:10.955 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:10.955 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15745 00:13:11.212 [2024-07-25 05:19:15.212496] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15745: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:11.212 05:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/25 05:19:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15745 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:11.212 request: 00:13:11.212 { 00:13:11.212 "method": "nvmf_create_subsystem", 00:13:11.212 "params": { 00:13:11.212 "nqn": "nqn.2016-06.io.spdk:cnode15745", 00:13:11.212 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:11.212 } 00:13:11.212 } 00:13:11.212 Got JSON-RPC error response 00:13:11.212 GoRPCClient: error on JSON-RPC call' 00:13:11.212 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/25 05:19:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15745 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:11.212 request: 00:13:11.212 { 00:13:11.212 "method": "nvmf_create_subsystem", 00:13:11.212 "params": { 00:13:11.212 "nqn": "nqn.2016-06.io.spdk:cnode15745", 00:13:11.212 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:11.212 } 00:13:11.212 } 00:13:11.212 Got JSON-RPC error response 00:13:11.212 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:11.212 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:11.212 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6436 00:13:11.471 [2024-07-25 05:19:15.484739] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6436: invalid model number 'SPDK_Controller' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/25 05:19:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode6436], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:11.471 request: 00:13:11.471 { 00:13:11.471 "method": "nvmf_create_subsystem", 00:13:11.471 "params": { 00:13:11.471 "nqn": "nqn.2016-06.io.spdk:cnode6436", 00:13:11.471 "model_number": "SPDK_Controller\u001f" 00:13:11.471 } 00:13:11.471 } 00:13:11.471 Got JSON-RPC error response 00:13:11.471 GoRPCClient: error on JSON-RPC call' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/25 05:19:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode6436], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:11.471 request: 00:13:11.471 { 00:13:11.471 "method": "nvmf_create_subsystem", 00:13:11.471 "params": { 00:13:11.471 "nqn": "nqn.2016-06.io.spdk:cnode6436", 00:13:11.471 "model_number": "SPDK_Controller\u001f" 00:13:11.471 } 00:13:11.471 } 00:13:11.471 Got JSON-RPC error response 00:13:11.471 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:11.471 05:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:11.471 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:11.472 05:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:11.472 05:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'tV4:;mO&S0F-P:wN98l~_' 00:13:11.472 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'tV4:;mO&S0F-P:wN98l~_' nqn.2016-06.io.spdk:cnode27752 00:13:11.731 [2024-07-25 05:19:15.817062] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27752: invalid serial number 'tV4:;mO&S0F-P:wN98l~_' 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/25 05:19:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27752 serial_number:tV4:;mO&S0F-P:wN98l~_], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN tV4:;mO&S0F-P:wN98l~_ 00:13:11.731 request: 00:13:11.731 { 00:13:11.731 "method": "nvmf_create_subsystem", 00:13:11.731 "params": { 00:13:11.731 "nqn": "nqn.2016-06.io.spdk:cnode27752", 00:13:11.731 "serial_number": "tV4:;mO&S0F-P:wN98l~_" 00:13:11.731 } 00:13:11.731 } 00:13:11.731 Got JSON-RPC error response 00:13:11.731 GoRPCClient: error on JSON-RPC call' 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/25 05:19:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27752 serial_number:tV4:;mO&S0F-P:wN98l~_], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN tV4:;mO&S0F-P:wN98l~_ 00:13:11.731 request: 00:13:11.731 { 00:13:11.731 "method": "nvmf_create_subsystem", 00:13:11.731 "params": { 00:13:11.731 "nqn": "nqn.2016-06.io.spdk:cnode27752", 00:13:11.731 "serial_number": "tV4:;mO&S0F-P:wN98l~_" 00:13:11.731 } 00:13:11.731 } 00:13:11.731 Got JSON-RPC error response 00:13:11.731 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:11.731 05:19:15 
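The invalid serial-number case above is driven by gen_random_s; the long runs of `printf %x` / `echo -e` / `string+=` triplets in the trace are its loop body executing once per character. A minimal sketch of that helper, reconstructed from the traced commands (the real target/invalid.sh may seed and pick indices differently; the selection line below is an assumption, since the trace only shows the chosen codes):

    # Build a random string of printable ASCII (codes 32-127), one char at a time.
    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})                      # the code-point table seen in the trace
        for ((ll = 0; ll < length; ll++)); do
            local cc=${chars[RANDOM % ${#chars[@]}]} # assumed uniform pick per character
            # printf renders the code as hex; echo -e expands \xNN to the character
            string+=$(echo -e "\\x$(printf %x "$cc")")
        done
        echo "$string"
    }

The generated string is then handed to rpc.py nvmf_create_subsystem as -s (serial number) or -d (model number), the GoRPCClient error text is captured into $out, and the test passes only if $out matches the expected substring, which is what the escaped [[ ... == *\I\n\v\a\l\i\d\ \S\N* ]] checks in the trace are doing.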
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:11.731 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x53' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.732 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.991 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
67 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x51' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.992 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:11.992 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:11.992 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 
== \- ]] 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i' 00:13:11.993 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i' nqn.2016-06.io.spdk:cnode28968 00:13:12.250 [2024-07-25 05:19:16.241459] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28968: invalid model number '6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i' 00:13:12.250 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/25 05:19:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i nqn:nqn.2016-06.io.spdk:cnode28968], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i 00:13:12.250 request: 00:13:12.250 { 00:13:12.250 "method": "nvmf_create_subsystem", 00:13:12.250 "params": { 00:13:12.250 "nqn": "nqn.2016-06.io.spdk:cnode28968", 00:13:12.250 "model_number": "6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i" 00:13:12.250 } 00:13:12.250 } 00:13:12.250 Got JSON-RPC error response 00:13:12.250 GoRPCClient: error on JSON-RPC call' 00:13:12.250 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/25 05:19:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i nqn:nqn.2016-06.io.spdk:cnode28968], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i 00:13:12.250 request: 00:13:12.250 { 00:13:12.250 "method": "nvmf_create_subsystem", 00:13:12.250 "params": { 00:13:12.251 "nqn": "nqn.2016-06.io.spdk:cnode28968", 00:13:12.251 "model_number": "6!BZ;zc/S+(YQTkCDY5yvAp>?+l?>7fJ#=Qfx(+$i" 00:13:12.251 } 00:13:12.251 } 00:13:12.251 Got JSON-RPC error response 00:13:12.251 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:12.251 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:12.508 [2024-07-25 05:19:16.489798] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.508 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:12.766 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:12.766 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:12.766 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:12.766 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:12.766 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:13.024 [2024-07-25 05:19:17.101824] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:13.024 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/25 05:19:17 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: 
map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:13.024 request: 00:13:13.024 { 00:13:13.024 "method": "nvmf_subsystem_remove_listener", 00:13:13.024 "params": { 00:13:13.024 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:13.024 "listen_address": { 00:13:13.024 "trtype": "tcp", 00:13:13.024 "traddr": "", 00:13:13.024 "trsvcid": "4421" 00:13:13.024 } 00:13:13.024 } 00:13:13.024 } 00:13:13.024 Got JSON-RPC error response 00:13:13.024 GoRPCClient: error on JSON-RPC call' 00:13:13.024 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/25 05:19:17 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:13.024 request: 00:13:13.024 { 00:13:13.024 "method": "nvmf_subsystem_remove_listener", 00:13:13.024 "params": { 00:13:13.024 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:13.024 "listen_address": { 00:13:13.024 "trtype": "tcp", 00:13:13.024 "traddr": "", 00:13:13.024 "trsvcid": "4421" 00:13:13.024 } 00:13:13.024 } 00:13:13.024 } 00:13:13.024 Got JSON-RPC error response 00:13:13.024 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:13.024 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3637 -i 0 00:13:13.283 [2024-07-25 05:19:17.406018] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3637: invalid cntlid range [0-65519] 00:13:13.283 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/25 05:19:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3637], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:13.283 request: 00:13:13.283 { 00:13:13.283 "method": "nvmf_create_subsystem", 00:13:13.283 "params": { 00:13:13.283 "nqn": "nqn.2016-06.io.spdk:cnode3637", 00:13:13.283 "min_cntlid": 0 00:13:13.283 } 00:13:13.283 } 00:13:13.283 Got JSON-RPC error response 00:13:13.283 GoRPCClient: error on JSON-RPC call' 00:13:13.283 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/25 05:19:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3637], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:13.283 request: 00:13:13.283 { 00:13:13.283 "method": "nvmf_create_subsystem", 00:13:13.283 "params": { 00:13:13.283 "nqn": "nqn.2016-06.io.spdk:cnode3637", 00:13:13.283 "min_cntlid": 0 00:13:13.283 } 00:13:13.283 } 00:13:13.283 Got JSON-RPC error response 00:13:13.283 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.283 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6176 -i 65520 00:13:13.541 [2024-07-25 05:19:17.714459] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6176: invalid cntlid range [65520-65519] 
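The cntlid calls on either side of this note probe both ends of the allowed controller-ID window (1-65519). Sketched as plain rpc.py invocations with the flags from the trace (the cnode names here are illustrative), every one of these exits nonzero with Code=-32602 and an "Invalid cntlid range" message:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeA -i 0         # min_cntlid below 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeB -i 65520     # min_cntlid above 65519
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeC -I 0         # max_cntlid below 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeD -I 65520     # max_cntlid above 65519
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeE -i 6 -I 5    # min greater than max

Note how the range echoed in each error ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) is the effective min/max pair the target computed before rejecting the request, which is what the test asserts on.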
00:13:13.798 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/25 05:19:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6176], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:13.798 request: 00:13:13.798 { 00:13:13.798 "method": "nvmf_create_subsystem", 00:13:13.798 "params": { 00:13:13.798 "nqn": "nqn.2016-06.io.spdk:cnode6176", 00:13:13.798 "min_cntlid": 65520 00:13:13.798 } 00:13:13.798 } 00:13:13.798 Got JSON-RPC error response 00:13:13.798 GoRPCClient: error on JSON-RPC call' 00:13:13.798 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/25 05:19:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6176], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:13.798 request: 00:13:13.798 { 00:13:13.798 "method": "nvmf_create_subsystem", 00:13:13.798 "params": { 00:13:13.798 "nqn": "nqn.2016-06.io.spdk:cnode6176", 00:13:13.798 "min_cntlid": 65520 00:13:13.798 } 00:13:13.798 } 00:13:13.798 Got JSON-RPC error response 00:13:13.798 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.798 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12431 -I 0 00:13:14.056 [2024-07-25 05:19:18.036600] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12431: invalid cntlid range [1-0] 00:13:14.056 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/25 05:19:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12431], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:14.056 request: 00:13:14.056 { 00:13:14.056 "method": "nvmf_create_subsystem", 00:13:14.056 "params": { 00:13:14.056 "nqn": "nqn.2016-06.io.spdk:cnode12431", 00:13:14.056 "max_cntlid": 0 00:13:14.056 } 00:13:14.056 } 00:13:14.056 Got JSON-RPC error response 00:13:14.056 GoRPCClient: error on JSON-RPC call' 00:13:14.056 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/25 05:19:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12431], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:14.056 request: 00:13:14.056 { 00:13:14.056 "method": "nvmf_create_subsystem", 00:13:14.056 "params": { 00:13:14.056 "nqn": "nqn.2016-06.io.spdk:cnode12431", 00:13:14.056 "max_cntlid": 0 00:13:14.056 } 00:13:14.056 } 00:13:14.056 Got JSON-RPC error response 00:13:14.056 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.056 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16286 -I 65520 00:13:14.314 [2024-07-25 05:19:18.280822] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16286: invalid cntlid range [1-65520] 00:13:14.315 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/25 05:19:18 
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16286], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:14.315 request: 00:13:14.315 { 00:13:14.315 "method": "nvmf_create_subsystem", 00:13:14.315 "params": { 00:13:14.315 "nqn": "nqn.2016-06.io.spdk:cnode16286", 00:13:14.315 "max_cntlid": 65520 00:13:14.315 } 00:13:14.315 } 00:13:14.315 Got JSON-RPC error response 00:13:14.315 GoRPCClient: error on JSON-RPC call' 00:13:14.315 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/25 05:19:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16286], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:14.315 request: 00:13:14.315 { 00:13:14.315 "method": "nvmf_create_subsystem", 00:13:14.315 "params": { 00:13:14.315 "nqn": "nqn.2016-06.io.spdk:cnode16286", 00:13:14.315 "max_cntlid": 65520 00:13:14.315 } 00:13:14.315 } 00:13:14.315 Got JSON-RPC error response 00:13:14.315 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.315 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13895 -i 6 -I 5 00:13:14.572 [2024-07-25 05:19:18.533039] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13895: invalid cntlid range [6-5] 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/25 05:19:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode13895], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:14.572 request: 00:13:14.572 { 00:13:14.572 "method": "nvmf_create_subsystem", 00:13:14.572 "params": { 00:13:14.572 "nqn": "nqn.2016-06.io.spdk:cnode13895", 00:13:14.572 "min_cntlid": 6, 00:13:14.572 "max_cntlid": 5 00:13:14.572 } 00:13:14.572 } 00:13:14.572 Got JSON-RPC error response 00:13:14.572 GoRPCClient: error on JSON-RPC call' 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/25 05:19:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode13895], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:14.572 request: 00:13:14.572 { 00:13:14.572 "method": "nvmf_create_subsystem", 00:13:14.572 "params": { 00:13:14.572 "nqn": "nqn.2016-06.io.spdk:cnode13895", 00:13:14.572 "min_cntlid": 6, 00:13:14.572 "max_cntlid": 5 00:13:14.572 } 00:13:14.572 } 00:13:14.572 Got JSON-RPC error response 00:13:14.572 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:14.572 { 00:13:14.572 "name": "foobar", 00:13:14.572 "method": "nvmf_delete_target", 00:13:14.572 "req_id": 1 00:13:14.572 } 00:13:14.572 Got JSON-RPC error response 00:13:14.572 response: 00:13:14.572 { 
00:13:14.572 "code": -32602, 00:13:14.572 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:14.572 }' 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:14.572 { 00:13:14.572 "name": "foobar", 00:13:14.572 "method": "nvmf_delete_target", 00:13:14.572 "req_id": 1 00:13:14.572 } 00:13:14.572 Got JSON-RPC error response 00:13:14.572 response: 00:13:14.572 { 00:13:14.572 "code": -32602, 00:13:14.572 "message": "The specified target doesn't exist, cannot delete it." 00:13:14.572 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.572 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.572 rmmod nvme_tcp 00:13:14.572 rmmod nvme_fabrics 00:13:14.572 rmmod nvme_keyring 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 75198 ']' 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 75198 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 75198 ']' 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 75198 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75198 00:13:14.830 killing process with pid 75198 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75198' 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 75198 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 75198 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:14.830 ************************************ 00:13:14.830 END TEST nvmf_invalid 00:13:14.830 ************************************ 00:13:14.830 00:13:14.830 real 0m5.219s 00:13:14.830 user 0m20.978s 00:13:14.830 sys 0m1.141s 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.830 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.120 ************************************ 00:13:15.120 START TEST nvmf_connect_stress 00:13:15.120 ************************************ 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:15.120 * Looking for test storage... 
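Between the END and START banners above, nvmftestfini unwinds the nvmf_invalid environment: it unloads the kernel initiator modules, kills the target process, and clears the namespace plumbing. A condensed sketch of that sequence, using the pid and interface names from this run (the helper internals in nvmf/common.sh retry and suppress errors more carefully than shown here):

    sync
    modprobe -v -r nvme-tcp       # the trace shows nvme_tcp, nvme_fabrics and
    modprobe -v -r nvme-fabrics   # nvme_keyring all being rmmod'ed here
    kill 75198                    # killprocess: the nvmf_tgt pid for this test
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # roughly what _remove_spdk_ns does
    ip -4 addr flush nvmf_init_if                  # drop the initiator-side address

nvmf_connect_stress then calls nvmftestinit again, which is why the veth/namespace topology is rebuilt from scratch in the trace that follows.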
00:13:15.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.120 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:15.121 Cannot find device "nvmf_tgt_br" 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.121 Cannot find device "nvmf_tgt_br2" 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:15.121 Cannot find device "nvmf_tgt_br" 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:15.121 Cannot find device "nvmf_tgt_br2" 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:15.121 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:15.380 
05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:15.380 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:15.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:13:15.381 00:13:15.381 --- 10.0.0.2 ping statistics --- 00:13:15.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.381 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:15.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:15.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:15.381 00:13:15.381 --- 10.0.0.3 ping statistics --- 00:13:15.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.381 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:15.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:15.381 00:13:15.381 --- 10.0.0.1 ping statistics --- 00:13:15.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.381 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=75687 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 75687 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 75687 ']' 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.381 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.381 [2024-07-25 05:19:19.508022] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:13:15.381 [2024-07-25 05:19:19.508117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.640 [2024-07-25 05:19:19.645886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.640 [2024-07-25 05:19:19.716689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.640 [2024-07-25 05:19:19.716946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.640 [2024-07-25 05:19:19.717127] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.640 [2024-07-25 05:19:19.717328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.640 [2024-07-25 05:19:19.717372] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.640 [2024-07-25 05:19:19.717583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.640 [2024-07-25 05:19:19.717714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.640 [2024-07-25 05:19:19.717735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.640 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.640 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:15.640 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.640 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.640 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.899 [2024-07-25 05:19:19.854246] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.899 05:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.899 [2024-07-25 05:19:19.876115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.899 NULL1 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75725 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.899 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.158 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:16.158 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:16.158 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.158 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.158 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.724 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.724 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:16.724 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.724 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.724 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.982 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.982 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:16.982 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.982 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.982 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.241 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.241 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:17.241 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.241 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.241 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.499 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.499 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:17.499 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.499 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.499 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.757 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.757 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:17.757 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.757 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.757 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.324 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.324 
05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:18.324 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.324 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.324 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.583 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.583 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:18.583 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.583 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.583 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.841 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:18.841 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.841 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.841 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.099 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.099 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:19.099 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.099 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.099 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.357 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.357 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:19.357 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.357 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.357 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.922 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.922 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:19.922 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.922 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.922 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.185 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.185 05:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:20.185 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.185 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.185 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.461 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.461 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:20.461 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.461 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.461 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:20.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.978 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.978 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:20.978 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.978 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.978 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.543 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.543 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:21.543 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.543 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.543 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.802 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.802 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:21.802 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.802 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.802 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.060 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.060 05:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:22.060 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.060 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.060 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.318 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.318 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:22.318 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.318 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.318 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.576 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.576 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:22.576 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.576 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.576 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.142 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.142 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:23.142 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.142 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.142 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.401 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.401 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:23.401 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.401 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.401 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.660 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.660 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:23.661 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.661 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.661 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.918 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.918 05:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:23.918 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.918 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.918 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.176 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.176 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:24.176 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.176 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.176 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.744 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.744 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:24.744 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.744 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.744 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.002 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.002 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:25.002 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.002 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.002 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.260 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.260 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:25.260 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.260 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.260 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.517 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.517 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:25.517 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.517 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.517 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.774 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.774 05:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:25.774 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.774 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.774 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.032 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75725 00:13:26.289 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75725) - No such process 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75725 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.289 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.289 rmmod nvme_tcp 00:13:26.289 rmmod nvme_fabrics 00:13:26.290 rmmod nvme_keyring 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 75687 ']' 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 75687 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 75687 ']' 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 75687 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75687 00:13:26.290 killing process with pid 75687 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:26.290 
05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75687' 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 75687 00:13:26.290 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 75687 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:26.548 00:13:26.548 real 0m11.519s 00:13:26.548 user 0m38.609s 00:13:26.548 sys 0m3.201s 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.548 ************************************ 00:13:26.548 END TEST nvmf_connect_stress 00:13:26.548 ************************************ 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.548 ************************************ 00:13:26.548 START TEST nvmf_fused_ordering 00:13:26.548 ************************************ 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:26.548 * Looking for test storage... 
00:13:26.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.548 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:26.549 Cannot find device "nvmf_tgt_br" 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:13:26.549 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.807 Cannot find device "nvmf_tgt_br2" 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:26.807 Cannot find device "nvmf_tgt_br" 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:26.807 Cannot find device "nvmf_tgt_br2" 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:26.807 
05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:26.807 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.065 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.065 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.065 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:27.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:27.065 00:13:27.065 --- 10.0.0.2 ping statistics --- 00:13:27.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.065 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:27.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:27.065 00:13:27.065 --- 10.0.0.3 ping statistics --- 00:13:27.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.065 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:27.065 00:13:27.065 --- 10.0.0.1 ping statistics --- 00:13:27.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.065 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.065 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=76045 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 76045 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 76045 ']' 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.066 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.066 [2024-07-25 05:19:31.099384] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
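nvmfappstart launches the target inside the namespace: -m 0x2 is a core mask selecting a single reactor on core 1 (confirmed by the reactor notice just below), -i 0 picks the shared-memory instance ID, and -e 0xFFFF enables every tracepoint group. waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A hand-rolled equivalent, as a sketch (the readiness probe via rpc_get_methods is an assumption; the harness's waitforlisten has its own polling loop):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the RPC UNIX socket until the target is ready to serve RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done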
00:13:27.066 [2024-07-25 05:19:31.099488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.066 [2024-07-25 05:19:31.239985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.324 [2024-07-25 05:19:31.340704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.324 [2024-07-25 05:19:31.340792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.324 [2024-07-25 05:19:31.340815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.324 [2024-07-25 05:19:31.340831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.324 [2024-07-25 05:19:31.340843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.324 [2024-07-25 05:19:31.340893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.260 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.260 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:28.260 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.260 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:28.260 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.260 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.260 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.261 [2024-07-25 05:19:32.194396] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.261 
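Provisioning happens over JSON-RPC (rpc_cmd wraps scripts/rpc.py). Three calls configure the target: a TCP transport with the options recorded above (-t tcp -o -u 8192), a subsystem that allows any host (-a), carries serial number SPDK00000000000001 (-s), and caps namespaces at 10 (-m), and a listener on the namespaced address. Spelled out directly, with $rpc introduced here as shorthand:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # transport first, then the subsystem, then where it listens
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420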
[2024-07-25 05:19:32.210507] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.261 NULL1 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.261 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:28.261 [2024-07-25 05:19:32.262176] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
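The namespace is backed by a null bdev: bdev_null_create NULL1 1000 512 makes a 1000 MB device with 512-byte blocks (reported by the test app as a 1 GB namespace) that discards writes and returns zeroes on reads, so no real storage is needed to exercise command ordering. The remaining steps as a sketch, reusing the $rpc shorthand from the previous block:

    $rpc bdev_null_create NULL1 1000 512    # name, size in MB, block size
    $rpc bdev_wait_for_examine              # let bdev examination settle first
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # the test app addresses the target with a transport ID string:
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line that follows is the app logging one completed iteration of its fused-command sequence; the counter runs from 0 through 1023.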
00:13:28.261 [2024-07-25 05:19:32.262229] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76101 ] 00:13:28.520 Attached to nqn.2016-06.io.spdk:cnode1 00:13:28.520 Namespace ID: 1 size: 1GB 00:13:28.520 fused_ordering(0) 00:13:28.520 fused_ordering(1) 00:13:28.520 fused_ordering(2) 00:13:28.520 fused_ordering(3) 00:13:28.520 fused_ordering(4) 00:13:28.520 fused_ordering(5) 00:13:28.520 fused_ordering(6) 00:13:28.520 fused_ordering(7) 00:13:28.520 fused_ordering(8) 00:13:28.520 fused_ordering(9) 00:13:28.520 fused_ordering(10) 00:13:28.520 fused_ordering(11) 00:13:28.520 fused_ordering(12) 00:13:28.520 fused_ordering(13) 00:13:28.520 fused_ordering(14) 00:13:28.520 fused_ordering(15) 00:13:28.520 fused_ordering(16) 00:13:28.520 fused_ordering(17) 00:13:28.520 fused_ordering(18) 00:13:28.520 fused_ordering(19) 00:13:28.520 fused_ordering(20) 00:13:28.520 fused_ordering(21) 00:13:28.520 fused_ordering(22) 00:13:28.520 fused_ordering(23) 00:13:28.520 fused_ordering(24) 00:13:28.520 fused_ordering(25) 00:13:28.520 fused_ordering(26) 00:13:28.520 fused_ordering(27) 00:13:28.520 fused_ordering(28) 00:13:28.520 fused_ordering(29) 00:13:28.520 fused_ordering(30) 00:13:28.520 fused_ordering(31) 00:13:28.520 fused_ordering(32) 00:13:28.520 fused_ordering(33) 00:13:28.520 fused_ordering(34) 00:13:28.520 fused_ordering(35) 00:13:28.520 fused_ordering(36) 00:13:28.520 fused_ordering(37) 00:13:28.520 fused_ordering(38) 00:13:28.520 fused_ordering(39) 00:13:28.520 fused_ordering(40) 00:13:28.520 fused_ordering(41) 00:13:28.520 fused_ordering(42) 00:13:28.520 fused_ordering(43) 00:13:28.520 fused_ordering(44) 00:13:28.520 fused_ordering(45) 00:13:28.520 fused_ordering(46) 00:13:28.520 fused_ordering(47) 00:13:28.520 fused_ordering(48) 00:13:28.520 fused_ordering(49) 00:13:28.520 fused_ordering(50) 00:13:28.520 fused_ordering(51) 00:13:28.520 fused_ordering(52) 00:13:28.520 fused_ordering(53) 00:13:28.520 fused_ordering(54) 00:13:28.520 fused_ordering(55) 00:13:28.520 fused_ordering(56) 00:13:28.520 fused_ordering(57) 00:13:28.520 fused_ordering(58) 00:13:28.520 fused_ordering(59) 00:13:28.520 fused_ordering(60) 00:13:28.520 fused_ordering(61) 00:13:28.520 fused_ordering(62) 00:13:28.520 fused_ordering(63) 00:13:28.520 fused_ordering(64) 00:13:28.520 fused_ordering(65) 00:13:28.520 fused_ordering(66) 00:13:28.520 fused_ordering(67) 00:13:28.520 fused_ordering(68) 00:13:28.520 fused_ordering(69) 00:13:28.520 fused_ordering(70) 00:13:28.520 fused_ordering(71) 00:13:28.520 fused_ordering(72) 00:13:28.520 fused_ordering(73) 00:13:28.520 fused_ordering(74) 00:13:28.520 fused_ordering(75) 00:13:28.520 fused_ordering(76) 00:13:28.520 fused_ordering(77) 00:13:28.520 fused_ordering(78) 00:13:28.520 fused_ordering(79) 00:13:28.520 fused_ordering(80) 00:13:28.520 fused_ordering(81) 00:13:28.520 fused_ordering(82) 00:13:28.520 fused_ordering(83) 00:13:28.520 fused_ordering(84) 00:13:28.520 fused_ordering(85) 00:13:28.520 fused_ordering(86) 00:13:28.520 fused_ordering(87) 00:13:28.520 fused_ordering(88) 00:13:28.520 fused_ordering(89) 00:13:28.520 fused_ordering(90) 00:13:28.520 fused_ordering(91) 00:13:28.520 fused_ordering(92) 00:13:28.520 fused_ordering(93) 00:13:28.520 fused_ordering(94) 00:13:28.520 fused_ordering(95) 00:13:28.520 fused_ordering(96) 00:13:28.520 fused_ordering(97) 00:13:28.520 
fused_ordering(98) 00:13:28.520 ... fused_ordering(957) 00:13:30.486 (counters 98 through 957 continue consecutively, one entry apiece, with timestamps advancing from 00:13:28.520 to 00:13:30.486)
fused_ordering(958) 00:13:30.486 fused_ordering(959) 00:13:30.486 fused_ordering(960) 00:13:30.486 fused_ordering(961) 00:13:30.486 fused_ordering(962) 00:13:30.486 fused_ordering(963) 00:13:30.486 fused_ordering(964) 00:13:30.486 fused_ordering(965) 00:13:30.486 fused_ordering(966) 00:13:30.486 fused_ordering(967) 00:13:30.486 fused_ordering(968) 00:13:30.486 fused_ordering(969) 00:13:30.486 fused_ordering(970) 00:13:30.486 fused_ordering(971) 00:13:30.486 fused_ordering(972) 00:13:30.486 fused_ordering(973) 00:13:30.486 fused_ordering(974) 00:13:30.486 fused_ordering(975) 00:13:30.486 fused_ordering(976) 00:13:30.486 fused_ordering(977) 00:13:30.486 fused_ordering(978) 00:13:30.486 fused_ordering(979) 00:13:30.486 fused_ordering(980) 00:13:30.486 fused_ordering(981) 00:13:30.486 fused_ordering(982) 00:13:30.486 fused_ordering(983) 00:13:30.486 fused_ordering(984) 00:13:30.486 fused_ordering(985) 00:13:30.486 fused_ordering(986) 00:13:30.486 fused_ordering(987) 00:13:30.486 fused_ordering(988) 00:13:30.486 fused_ordering(989) 00:13:30.486 fused_ordering(990) 00:13:30.486 fused_ordering(991) 00:13:30.486 fused_ordering(992) 00:13:30.486 fused_ordering(993) 00:13:30.486 fused_ordering(994) 00:13:30.486 fused_ordering(995) 00:13:30.486 fused_ordering(996) 00:13:30.486 fused_ordering(997) 00:13:30.486 fused_ordering(998) 00:13:30.486 fused_ordering(999) 00:13:30.486 fused_ordering(1000) 00:13:30.486 fused_ordering(1001) 00:13:30.486 fused_ordering(1002) 00:13:30.486 fused_ordering(1003) 00:13:30.486 fused_ordering(1004) 00:13:30.486 fused_ordering(1005) 00:13:30.486 fused_ordering(1006) 00:13:30.486 fused_ordering(1007) 00:13:30.486 fused_ordering(1008) 00:13:30.486 fused_ordering(1009) 00:13:30.486 fused_ordering(1010) 00:13:30.486 fused_ordering(1011) 00:13:30.486 fused_ordering(1012) 00:13:30.487 fused_ordering(1013) 00:13:30.487 fused_ordering(1014) 00:13:30.487 fused_ordering(1015) 00:13:30.487 fused_ordering(1016) 00:13:30.487 fused_ordering(1017) 00:13:30.487 fused_ordering(1018) 00:13:30.487 fused_ordering(1019) 00:13:30.487 fused_ordering(1020) 00:13:30.487 fused_ordering(1021) 00:13:30.487 fused_ordering(1022) 00:13:30.487 fused_ordering(1023) 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.487 rmmod nvme_tcp 00:13:30.487 rmmod nvme_fabrics 00:13:30.487 rmmod nvme_keyring 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:30.487 05:19:34 
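Teardown is deliberately tolerant: nvmftestfini syncs, unloads the kernel initiator modules inside a retry loop with errexit suspended (the set +e ... set -e pair and the {1..20} bound are visible above), then kills the target and removes the namespace, as the next entries show. Condensed into a sketch (the retry delay is an assumption; the simplified kill and namespace removal stand in for the killprocess and remove_spdk_ns helpers):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                           # back-off between attempts; value assumed
    done
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"    # reap the target and propagate its status
    ip netns delete nvmf_tgt_ns_spdk
    ip -4 addr flush nvmf_init_if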
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 76045 ']' 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 76045 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 76045 ']' 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 76045 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76045 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:30.487 killing process with pid 76045 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76045' 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 76045 00:13:30.487 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 76045 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:30.746 00:13:30.746 real 0m4.156s 00:13:30.746 user 0m5.127s 00:13:30.746 sys 0m1.334s 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.746 ************************************ 00:13:30.746 END TEST nvmf_fused_ordering 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.746 ************************************ 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.746 ************************************ 00:13:30.746 START TEST nvmf_ns_masking 00:13:30.746 ************************************ 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.746 * Looking for test storage... 00:13:30.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.746 05:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cf3678ae-4a00-4d36-b422-4a5a5e906a81 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=02ab5fbb-3730-46d2-b1db-ba1c913742a6 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:30.746 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a610f7a2-eed1-4bf0-a97a-5f61b4389d39 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
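ns_masking generates two namespace UUIDs and two host NQNs (host1, host2) up front, and the host NQN/ID pair seen a little earlier comes from nvme gen-hostnqn: the premise of the test is that a namespace can be attached so that only an allowed host sees it. A sketch of the masking primitives the test is built around (RPC names as found in SPDK's rpc.py; the exact flags in this SPDK revision are an assumption, and Malloc1 is a placeholder bdev name):

    # attach a namespace that is not automatically visible to every host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 --no-auto-visible
    # expose namespace 1 to host1 only, then mask it again
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1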
00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:30.747 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:31.005 Cannot find device "nvmf_tgt_br" 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.005 Cannot find device "nvmf_tgt_br2" 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:31.005 Cannot find device "nvmf_tgt_br" 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:31.005 Cannot find device "nvmf_tgt_br2" 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:13:31.005 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.005 
05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:31.005 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:31.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:31.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:13:31.264 00:13:31.264 --- 10.0.0.2 ping statistics --- 00:13:31.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.264 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:31.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:31.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:13:31.264 00:13:31.264 --- 10.0.0.3 ping statistics --- 00:13:31.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.264 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:31.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:31.264 00:13:31.264 --- 10.0.0.1 ping statistics --- 00:13:31.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.264 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=76310 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 76310 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76310 ']' 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.264 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.264 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:31.264 [2024-07-25 05:19:35.348818] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:13:31.264 [2024-07-25 05:19:35.348923] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.523 [2024-07-25 05:19:35.490672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.523 [2024-07-25 05:19:35.549678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.523 [2024-07-25 05:19:35.549729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.523 [2024-07-25 05:19:35.549740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.523 [2024-07-25 05:19:35.549749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.523 [2024-07-25 05:19:35.549756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.523 [2024-07-25 05:19:35.549783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:32.457 [2024-07-25 05:19:36.594339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:32.457 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:32.715 Malloc1 00:13:32.715 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:32.973 Malloc2 00:13:32.973 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:33.232 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:33.490 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.749 [2024-07-25 05:19:37.821634] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.749 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:33.749 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a610f7a2-eed1-4bf0-a97a-5f61b4389d39 -a 10.0.0.2 -s 4420 -i 4 00:13:34.007 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.007 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:34.007 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.008 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:34.008 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:35.909 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.909 [ 0]:0x1 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.909 05:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e1320c60de046b69dc9be2e90c3282b 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e1320c60de046b69dc9be2e90c3282b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.909 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:36.476 [ 0]:0x1 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e1320c60de046b69dc9be2e90c3282b 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e1320c60de046b69dc9be2e90c3282b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:36.476 [ 1]:0x2 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07f00c9fb887452fb4cd8fc3bedb4fe5 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07f00c9fb887452fb4cd8fc3bedb4fe5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.476 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.735 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:37.001 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:37.002 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a610f7a2-eed1-4bf0-a97a-5f61b4389d39 -a 10.0.0.2 -s 4420 -i 4 00:13:37.002 05:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:37.002 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:37.002 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.002 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:37.002 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:37.002 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:39.530 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.531 [ 0]:0x2 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07f00c9fb887452fb4cd8fc3bedb4fe5 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07f00c9fb887452fb4cd8fc3bedb4fe5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.531 [ 0]:0x1 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e1320c60de046b69dc9be2e90c3282b 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e1320c60de046b69dc9be2e90c3282b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.531 [ 1]:0x2 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:39.531 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.789 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=07f00c9fb887452fb4cd8fc3bedb4fe5 00:13:39.789 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07f00c9fb887452fb4cd8fc3bedb4fe5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.789 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.047 [ 0]:0x2 00:13:40.047 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:40.048 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.048 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07f00c9fb887452fb4cd8fc3bedb4fe5 00:13:40.048 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 07f00c9fb887452fb4cd8fc3bedb4fe5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.048 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:40.048 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.048 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:40.306 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:40.306 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a610f7a2-eed1-4bf0-a97a-5f61b4389d39 -a 10.0.0.2 -s 4420 -i 4 00:13:40.564 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:40.564 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:40.564 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.564 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:40.564 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:40.564 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:42.465 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.753 [ 0]:0x1 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e1320c60de046b69dc9be2e90c3282b 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e1320c60de046b69dc9be2e90c3282b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.753 [ 1]:0x2 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07f00c9fb887452fb4cd8fc3bedb4fe5 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07f00c9fb887452fb4cd8fc3bedb4fe5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.753 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
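The ns_is_visible checks that dominate this trace all follow one pattern: list the namespaces the controller exposes, then confirm the NGUID reported for a given NSID is non-zero. A rough reconstruction of the helper, inferred only from the commands visible in the trace (the real implementation lives in test/nvmf/target/ns_masking.sh and may differ in detail):

  ns_is_visible() {
      # $1 is the NSID in hex, e.g. 0x1; /dev/nvme0 was found via 'nvme list-subsys'
      nvme list-ns /dev/nvme0 | grep "$1"    # prints e.g. "[ 0]:0x1" when the NSID is listed
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      # a namespace the host may not access identifies with an all-zero NGUID
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

When a namespace has just been hidden with nvmf_ns_remove_host, the test wraps the call in the NOT helper and treats the non-zero exit status (the es=1 above) as success.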
00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.012 [ 0]:0x2 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.012 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07f00c9fb887452fb4cd8fc3bedb4fe5 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07f00c9fb887452fb4cd8fc3bedb4fe5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:43.271 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:43.530 [2024-07-25 05:19:47.508939] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:43.530 2024/07/25 05:19:47 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:13:43.530 request: 00:13:43.530 { 00:13:43.530 "method": "nvmf_ns_remove_host", 00:13:43.530 "params": { 00:13:43.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.530 "nsid": 2, 00:13:43.530 "host": "nqn.2016-06.io.spdk:host1" 00:13:43.530 } 00:13:43.530 } 00:13:43.530 Got JSON-RPC error response 00:13:43.530 GoRPCClient: error on JSON-RPC call 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:13:43.530 [ 0]:0x2 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07f00c9fb887452fb4cd8fc3bedb4fe5 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07f00c9fb887452fb4cd8fc3bedb4fe5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76694 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76694 /var/tmp/host.sock 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76694 ']' 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.530 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.789 [2024-07-25 05:19:47.774516] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
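At this point the test starts a second SPDK process to play the NVMe-oF host in userspace: the target keeps its default RPC socket (/var/tmp/spdk.sock, inside the network namespace), while this spdk_tgt instance is pinned to core 1 (-m 2) and serves RPCs on /var/tmp/host.sock. Every hostrpc call in the remainder of the trace is ordinary rpc.py aimed at that second socket, for example (arguments copied from the traced commands):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side: build an NVMe bdev over the fabric, identifying as host1
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

Because each host NQN has been granted exactly one namespace, the resulting bdev names encode both controller and namespace: nvme0n1 for host1 (namespace 1) and nvme1n2 for host2 (namespace 2), which is what the sort/xargs check on bdev_get_bdevs below asserts.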
00:13:43.789 [2024-07-25 05:19:47.774669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76694 ] 00:13:43.789 [2024-07-25 05:19:47.940746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.047 [2024-07-25 05:19:48.014488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.612 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.612 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:44.612 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.179 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.179 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cf3678ae-4a00-4d36-b422-4a5a5e906a81 00:13:45.180 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:45.180 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CF3678AE4A004D36B4224A5A5E906A81 -i 00:13:45.771 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 02ab5fbb-3730-46d2-b1db-ba1c913742a6 00:13:45.771 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:45.771 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 02AB5FBB373046D2B1DBBA1C913742A6 -i 00:13:45.771 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.028 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:46.593 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:46.593 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:46.593 nvme0n1 00:13:46.851 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:46.851 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:47.108 nvme1n2 00:13:47.108 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:47.108 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:47.108 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:47.108 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:47.108 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:47.366 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:47.366 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:47.366 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:47.366 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:47.964 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cf3678ae-4a00-4d36-b422-4a5a5e906a81 == \c\f\3\6\7\8\a\e\-\4\a\0\0\-\4\d\3\6\-\b\4\2\2\-\4\a\5\a\5\e\9\0\6\a\8\1 ]] 00:13:47.964 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:47.964 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:47.964 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 02ab5fbb-3730-46d2-b1db-ba1c913742a6 == \0\2\a\b\5\f\b\b\-\3\7\3\0\-\4\6\d\2\-\b\1\d\b\-\b\a\1\c\9\1\3\7\4\2\a\6 ]] 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 76694 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76694 ']' 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76694 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76694 00:13:47.964 killing process with pid 76694 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76694' 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76694 00:13:47.964 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 76694 00:13:48.223 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.481 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:48.481 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:48.481 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.481 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.739 rmmod nvme_tcp 00:13:48.739 rmmod nvme_fabrics 00:13:48.739 rmmod nvme_keyring 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 76310 ']' 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 76310 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76310 ']' 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76310 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76310 00:13:48.739 killing process with pid 76310 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76310' 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76310 00:13:48.739 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 76310 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
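Teardown mirrors the setup. Condensed from the records around this point (killprocess and _remove_spdk_ns are paraphrased here as an assumed sketch; the real helpers in autotest_common.sh and nvmf/common.sh additionally verify the process name and suppress xtrace noise):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp          # also drops nvme_tcp / nvme_fabrics / nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"            # pid 76310 in this run
  ip netns delete nvmf_tgt_ns_spdk              # assumed effect of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if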
00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.998 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.999 00:13:48.999 real 0m18.168s 00:13:48.999 user 0m29.503s 00:13:48.999 sys 0m2.527s 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 ************************************ 00:13:48.999 END TEST nvmf_ns_masking 00:13:48.999 ************************************ 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.999 05:19:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 ************************************ 00:13:48.999 START TEST nvmf_auth_target 00:13:48.999 ************************************ 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:48.999 * Looking for test storage... 
00:13:48.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.999 05:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 
-- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.999 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:49.000 Cannot find device "nvmf_tgt_br" 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.000 Cannot find device "nvmf_tgt_br2" 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:49.000 Cannot find device "nvmf_tgt_br" 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:49.000 Cannot find device "nvmf_tgt_br2" 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:13:49.000 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.259 05:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.259 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:49.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:13:49.517 00:13:49.517 --- 10.0.0.2 ping statistics --- 00:13:49.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.517 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:49.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:49.517 00:13:49.517 --- 10.0.0.3 ping statistics --- 00:13:49.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.517 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:49.517 00:13:49.517 --- 10.0.0.1 ping statistics --- 00:13:49.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.517 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77053 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77053 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77053 ']' 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
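The nvmf_veth_init trace above wires the initiator and target into one L2 segment before the target app is started; as a minimal sketch reconstructed from the traced commands (the second target interface, the FORWARD rule, and the various "ip link set ... up" calls are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow in the trace verify this topology end to end before nvmf_tgt is launched inside the namespace.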
00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.517 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77103 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=14929d3e2c215d894fda1862299c9923a90818024b8a7692 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uDA 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 14929d3e2c215d894fda1862299c9923a90818024b8a7692 0 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 14929d3e2c215d894fda1862299c9923a90818024b8a7692 0 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=14929d3e2c215d894fda1862299c9923a90818024b8a7692 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:50.451 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:50.709 05:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uDA 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uDA 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.uDA 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=24223ef551bb271693bd5d764b2f1e5aeadfb1ccd06940069d878b77e8793db6 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tBz 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 24223ef551bb271693bd5d764b2f1e5aeadfb1ccd06940069d878b77e8793db6 3 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 24223ef551bb271693bd5d764b2f1e5aeadfb1ccd06940069d878b77e8793db6 3 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=24223ef551bb271693bd5d764b2f1e5aeadfb1ccd06940069d878b77e8793db6 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tBz 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tBz 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.tBz 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:50.709 05:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a2054d0e24a2c15b85c9f45a01da3d27 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UoM 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a2054d0e24a2c15b85c9f45a01da3d27 1 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a2054d0e24a2c15b85c9f45a01da3d27 1 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a2054d0e24a2c15b85c9f45a01da3d27 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UoM 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UoM 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.UoM 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:50.709 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19424b4234b7ffdd51eb8b42afb1b16b3e845d4dbdf1e3fa 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mSq 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19424b4234b7ffdd51eb8b42afb1b16b3e845d4dbdf1e3fa 2 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19424b4234b7ffdd51eb8b42afb1b16b3e845d4dbdf1e3fa 2 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19424b4234b7ffdd51eb8b42afb1b16b3e845d4dbdf1e3fa 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mSq 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mSq 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.mSq 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:50.710 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=22452ac9b90c0a665d38c4fec90000a066ef8ca54d1a90cb 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YMh 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 22452ac9b90c0a665d38c4fec90000a066ef8ca54d1a90cb 2 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 22452ac9b90c0a665d38c4fec90000a066ef8ca54d1a90cb 2 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=22452ac9b90c0a665d38c4fec90000a066ef8ca54d1a90cb 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YMh 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YMh 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.YMh 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:50.968 05:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:50.968 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed8872a446c811dd237b1fe080decb76 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QMI 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed8872a446c811dd237b1fe080decb76 1 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed8872a446c811dd237b1fe080decb76 1 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed8872a446c811dd237b1fe080decb76 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:50.969 05:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QMI 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QMI 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.QMI 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6369759238a97377f775d0206a759702eacd673a4db4f76d2ad9c6c088a47a1b 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4oU 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
6369759238a97377f775d0206a759702eacd673a4db4f76d2ad9c6c088a47a1b 3 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6369759238a97377f775d0206a759702eacd673a4db4f76d2ad9c6c088a47a1b 3 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6369759238a97377f775d0206a759702eacd673a4db4f76d2ad9c6c088a47a1b 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4oU 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4oU 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.4oU 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77053 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77053 ']' 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.969 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77103 /var/tmp/host.sock 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77103 ']' 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
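All four key files generated above follow the same construction: xxd pulls random bytes as a hex string, and an inline python snippet wraps that ASCII string into a DHHC-1 secret. As a sketch of that formatting (a reconstruction, not SPDK's verbatim helper; the assumption that the trailing four bytes are the little-endian CRC-32 of the ASCII secret is checked only against the secrets visible in the nvme connect command later in this log):

    format_dhchap_key() {   # sketch of the traced helper: $1 = hex string, $2 = digest id
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$1" "$2"
    }
    hex_key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, used verbatim as the secret
    format_dhchap_key "$hex_key" 0             # 0=null, 1=sha256, 2=sha384, 3=sha512

For example, the key0 secret DHHC-1:00:MTQ5... passed to nvme connect below decodes to the 48-character hex string generated at target/auth.sh@67 plus its CRC-32.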
00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.536 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.794 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.794 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uDA 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uDA 00:13:51.795 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uDA 00:13:52.052 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.tBz ]] 00:13:52.052 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tBz 00:13:52.052 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.052 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.053 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.053 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tBz 00:13:52.053 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tBz 00:13:52.619 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:52.619 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UoM 00:13:52.619 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.619 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.619 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.619 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UoM 00:13:52.619 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UoM 00:13:52.877 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.mSq ]] 00:13:52.877 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mSq 00:13:52.877 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.877 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.877 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.877 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mSq 00:13:52.877 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mSq 00:13:53.135 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.135 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YMh 00:13:53.135 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.135 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.135 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.135 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YMh 00:13:53.135 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YMh 00:13:53.404 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.QMI ]] 00:13:53.404 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QMI 00:13:53.404 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.404 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.404 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.404 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QMI 00:13:53.404 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QMI 00:13:53.678 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.678 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4oU 00:13:53.678 05:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.678 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.678 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.678 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4oU 00:13:53.678 05:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4oU 00:13:54.244 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:54.244 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:54.244 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.244 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.244 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.244 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.503 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:13:54.761 00:13:54.761 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.761 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.761 05:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.019 { 00:13:55.019 "auth": { 00:13:55.019 "dhgroup": "null", 00:13:55.019 "digest": "sha256", 00:13:55.019 "state": "completed" 00:13:55.019 }, 00:13:55.019 "cntlid": 1, 00:13:55.019 "listen_address": { 00:13:55.019 "adrfam": "IPv4", 00:13:55.019 "traddr": "10.0.0.2", 00:13:55.019 "trsvcid": "4420", 00:13:55.019 "trtype": "TCP" 00:13:55.019 }, 00:13:55.019 "peer_address": { 00:13:55.019 "adrfam": "IPv4", 00:13:55.019 "traddr": "10.0.0.1", 00:13:55.019 "trsvcid": "37224", 00:13:55.019 "trtype": "TCP" 00:13:55.019 }, 00:13:55.019 "qid": 0, 00:13:55.019 "state": "enabled", 00:13:55.019 "thread": "nvmf_tgt_poll_group_000" 00:13:55.019 } 00:13:55.019 ]' 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.019 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.277 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:55.277 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.277 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.277 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.277 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.535 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.825 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.825 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.825 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
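
The records above trace one full pass of the connect_authenticate() helper from target/auth.sh. Condensed into a standalone sketch, the round-trip looks roughly as follows; the target_rpc/host_rpc shorthands are illustrative (the script itself uses the rpc_cmd and hostrpc wrappers), the target app is assumed to listen on rpc.py's default socket, and the initiator app uses /var/tmp/host.sock as shown in the traces.

    # Sketch of one DHCHAP round-trip (digest sha256, dhgroup null, key1).
    target_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78

    # Restrict the initiator to the digest/dhgroup pair under test.
    host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # Allow the host on the subsystem with this key pair (target side), then
    # attach a controller with the matching keys (host side).
    target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The attach only succeeds if authentication completed; confirm it exists.
    [[ $(host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
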
00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.084 { 00:14:01.084 "auth": { 00:14:01.084 "dhgroup": "null", 00:14:01.084 "digest": "sha256", 00:14:01.084 "state": "completed" 00:14:01.084 }, 00:14:01.084 "cntlid": 3, 00:14:01.084 "listen_address": { 00:14:01.084 "adrfam": "IPv4", 00:14:01.084 "traddr": "10.0.0.2", 00:14:01.084 "trsvcid": "4420", 00:14:01.084 "trtype": "TCP" 00:14:01.084 }, 00:14:01.084 "peer_address": { 00:14:01.084 "adrfam": "IPv4", 00:14:01.084 "traddr": "10.0.0.1", 00:14:01.084 "trsvcid": "37022", 00:14:01.084 "trtype": "TCP" 00:14:01.084 }, 00:14:01.084 "qid": 0, 00:14:01.084 "state": "enabled", 00:14:01.084 "thread": "nvmf_tgt_poll_group_000" 00:14:01.084 } 00:14:01.084 ]' 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.084 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.343 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:01.343 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.343 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.343 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.343 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.602 05:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
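
Each pass is then verified from the target side: nvmf_subsystem_get_qpairs returns the qpair's negotiated auth block (digest, dhgroup, state), and the jq checks in the traces compare it against what was configured. A condensed sketch of that verification and teardown, reusing the shorthands from the previous sketch:

    # Read back the negotiated auth parameters for the active qpair and
    # compare them with the configured digest/dhgroup/state (sketch).
    qpairs=$(target_rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach before the next key/dhgroup combination; the kernel-initiator leg
    # (nvme connect/disconnect with the DHHC-1 secrets) follows the same cycle.
    host_rpc bdev_nvme_detach_controller nvme0
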
00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.536 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.795 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.063 00:14:03.063 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.063 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.063 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.360 { 00:14:03.360 "auth": { 00:14:03.360 "dhgroup": "null", 00:14:03.360 "digest": "sha256", 00:14:03.360 "state": "completed" 00:14:03.360 }, 00:14:03.360 "cntlid": 5, 00:14:03.360 "listen_address": { 00:14:03.360 "adrfam": "IPv4", 00:14:03.360 "traddr": "10.0.0.2", 00:14:03.360 "trsvcid": "4420", 00:14:03.360 "trtype": "TCP" 00:14:03.360 }, 00:14:03.360 "peer_address": { 00:14:03.360 "adrfam": "IPv4", 00:14:03.360 "traddr": "10.0.0.1", 00:14:03.360 "trsvcid": "37050", 00:14:03.360 "trtype": "TCP" 00:14:03.360 }, 00:14:03.360 "qid": 0, 00:14:03.360 "state": "enabled", 00:14:03.360 "thread": "nvmf_tgt_poll_group_000" 00:14:03.360 } 00:14:03.360 ]' 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.360 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.618 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:03.618 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.618 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.618 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.618 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.875 05:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:14:04.811 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.811 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:04.811 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.811 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.812 05:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.812 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.070 00:14:05.328 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.328 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.328 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.587 { 00:14:05.587 "auth": { 00:14:05.587 "dhgroup": "null", 00:14:05.587 "digest": "sha256", 00:14:05.587 "state": "completed" 00:14:05.587 }, 00:14:05.587 "cntlid": 7, 00:14:05.587 "listen_address": { 00:14:05.587 "adrfam": "IPv4", 00:14:05.587 
"traddr": "10.0.0.2", 00:14:05.587 "trsvcid": "4420", 00:14:05.587 "trtype": "TCP" 00:14:05.587 }, 00:14:05.587 "peer_address": { 00:14:05.587 "adrfam": "IPv4", 00:14:05.587 "traddr": "10.0.0.1", 00:14:05.587 "trsvcid": "37090", 00:14:05.587 "trtype": "TCP" 00:14:05.587 }, 00:14:05.587 "qid": 0, 00:14:05.587 "state": "enabled", 00:14:05.587 "thread": "nvmf_tgt_poll_group_000" 00:14:05.587 } 00:14:05.587 ]' 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.587 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.847 05:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:06.792 05:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.050 
05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.050 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.308 00:14:07.308 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.308 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.308 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.566 { 00:14:07.566 "auth": { 00:14:07.566 "dhgroup": "ffdhe2048", 00:14:07.566 "digest": "sha256", 00:14:07.566 "state": "completed" 00:14:07.566 }, 00:14:07.566 "cntlid": 9, 00:14:07.566 "listen_address": { 00:14:07.566 "adrfam": "IPv4", 00:14:07.566 "traddr": "10.0.0.2", 00:14:07.566 "trsvcid": "4420", 00:14:07.566 "trtype": "TCP" 00:14:07.566 }, 00:14:07.566 "peer_address": { 00:14:07.566 "adrfam": "IPv4", 00:14:07.566 "traddr": "10.0.0.1", 00:14:07.566 "trsvcid": "37102", 00:14:07.566 "trtype": "TCP" 00:14:07.566 }, 00:14:07.566 "qid": 0, 00:14:07.566 "state": "enabled", 00:14:07.566 "thread": "nvmf_tgt_poll_group_000" 00:14:07.566 } 
00:14:07.566 ]' 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.566 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.824 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.824 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.824 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.824 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.824 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.082 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.015 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.273 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.556 00:14:09.556 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.556 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.556 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.812 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.812 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.812 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.812 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.812 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.812 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.812 { 00:14:09.812 "auth": { 00:14:09.812 "dhgroup": "ffdhe2048", 00:14:09.812 "digest": "sha256", 00:14:09.812 "state": "completed" 00:14:09.812 }, 00:14:09.812 "cntlid": 11, 00:14:09.812 "listen_address": { 00:14:09.812 "adrfam": "IPv4", 00:14:09.812 "traddr": "10.0.0.2", 00:14:09.812 "trsvcid": "4420", 00:14:09.812 "trtype": "TCP" 00:14:09.812 }, 00:14:09.813 "peer_address": { 00:14:09.813 "adrfam": "IPv4", 00:14:09.813 "traddr": "10.0.0.1", 00:14:09.813 "trsvcid": "51548", 00:14:09.813 "trtype": "TCP" 00:14:09.813 }, 00:14:09.813 "qid": 0, 00:14:09.813 "state": "enabled", 00:14:09.813 "thread": "nvmf_tgt_poll_group_000" 00:14:09.813 } 00:14:09.813 ]' 00:14:09.813 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.813 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.813 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.070 05:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.070 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.070 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.070 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.070 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.328 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:14:10.893 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.152 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:11.152 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.152 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.152 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.152 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.152 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.152 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
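
By this point the excerpt has moved from dhgroup null to ffdhe2048 (ffdhe3072 follows below), still with sha256: the whole round-trip is driven by the nested loops marked target/auth.sh@91-@94 in the traces. A loop skeleton matching those markers, with the arrays abbreviated to the values this excerpt actually exercises:

    # Loop skeleton mirroring target/auth.sh@91-96 (sketch; the digests and
    # dhgroups arrays are abbreviated to what is visible in this excerpt).
    digests=(sha256)
    dhgroups=(null ffdhe2048 ffdhe3072)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in 0 1 2 3; do
                host_rpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # connect_authenticate is the target/auth.sh helper traced above.
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
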
00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.410 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.668 00:14:11.668 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.668 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.668 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.926 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.926 { 00:14:11.926 "auth": { 00:14:11.926 "dhgroup": "ffdhe2048", 00:14:11.926 "digest": "sha256", 00:14:11.926 "state": "completed" 00:14:11.926 }, 00:14:11.926 "cntlid": 13, 00:14:11.926 "listen_address": { 00:14:11.926 "adrfam": "IPv4", 00:14:11.926 "traddr": "10.0.0.2", 00:14:11.926 "trsvcid": "4420", 00:14:11.926 "trtype": "TCP" 00:14:11.926 }, 00:14:11.926 "peer_address": { 00:14:11.926 "adrfam": "IPv4", 00:14:11.926 "traddr": "10.0.0.1", 00:14:11.926 "trsvcid": "51568", 00:14:11.926 "trtype": "TCP" 00:14:11.926 }, 00:14:11.926 "qid": 0, 00:14:11.926 "state": "enabled", 00:14:11.926 "thread": "nvmf_tgt_poll_group_000" 00:14:11.926 } 00:14:11.926 ]' 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.926 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.183 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:12.183 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.183 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.183 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.183 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.441 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:14:13.032 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.032 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:13.032 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.033 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.033 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.033 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.033 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.033 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.291 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.549 00:14:13.549 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.549 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.549 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.115 { 00:14:14.115 "auth": { 00:14:14.115 "dhgroup": "ffdhe2048", 00:14:14.115 "digest": "sha256", 00:14:14.115 "state": "completed" 00:14:14.115 }, 00:14:14.115 "cntlid": 15, 00:14:14.115 "listen_address": { 00:14:14.115 "adrfam": "IPv4", 00:14:14.115 "traddr": "10.0.0.2", 00:14:14.115 "trsvcid": "4420", 00:14:14.115 "trtype": "TCP" 00:14:14.115 }, 00:14:14.115 "peer_address": { 00:14:14.115 "adrfam": "IPv4", 00:14:14.115 "traddr": "10.0.0.1", 00:14:14.115 "trsvcid": "51594", 00:14:14.115 "trtype": "TCP" 00:14:14.115 }, 00:14:14.115 "qid": 0, 00:14:14.115 "state": "enabled", 00:14:14.115 "thread": "nvmf_tgt_poll_group_000" 00:14:14.115 } 00:14:14.115 ]' 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.115 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.373 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:15.307 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.565 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.824 00:14:15.824 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.824 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.824 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.084 { 00:14:16.084 "auth": { 00:14:16.084 "dhgroup": "ffdhe3072", 00:14:16.084 "digest": "sha256", 00:14:16.084 "state": "completed" 00:14:16.084 }, 00:14:16.084 "cntlid": 17, 00:14:16.084 "listen_address": { 00:14:16.084 "adrfam": "IPv4", 00:14:16.084 "traddr": "10.0.0.2", 00:14:16.084 "trsvcid": "4420", 00:14:16.084 "trtype": "TCP" 00:14:16.084 }, 00:14:16.084 "peer_address": { 00:14:16.084 "adrfam": "IPv4", 00:14:16.084 "traddr": "10.0.0.1", 00:14:16.084 "trsvcid": "51618", 00:14:16.084 "trtype": "TCP" 00:14:16.084 }, 00:14:16.084 "qid": 0, 00:14:16.084 "state": "enabled", 00:14:16.084 "thread": "nvmf_tgt_poll_group_000" 00:14:16.084 } 00:14:16.084 ]' 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.084 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.351 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.351 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.351 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.351 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.351 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.609 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:14:17.176 05:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.176 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:17.176 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.176 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.176 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.176 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.176 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.176 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.743 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.002 00:14:18.002 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.002 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.002 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.260 { 00:14:18.260 "auth": { 00:14:18.260 "dhgroup": "ffdhe3072", 00:14:18.260 "digest": "sha256", 00:14:18.260 "state": "completed" 00:14:18.260 }, 00:14:18.260 "cntlid": 19, 00:14:18.260 "listen_address": { 00:14:18.260 "adrfam": "IPv4", 00:14:18.260 "traddr": "10.0.0.2", 00:14:18.260 "trsvcid": "4420", 00:14:18.260 "trtype": "TCP" 00:14:18.260 }, 00:14:18.260 "peer_address": { 00:14:18.260 "adrfam": "IPv4", 00:14:18.260 "traddr": "10.0.0.1", 00:14:18.260 "trsvcid": "51640", 00:14:18.260 "trtype": "TCP" 00:14:18.260 }, 00:14:18.260 "qid": 0, 00:14:18.260 "state": "enabled", 00:14:18.260 "thread": "nvmf_tgt_poll_group_000" 00:14:18.260 } 00:14:18.260 ]' 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.260 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.519 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.519 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.519 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.519 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.519 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.777 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:19.709 
05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.709 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.967 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.967 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.967 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.224 00:14:20.224 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.224 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.224 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.482 { 00:14:20.482 "auth": { 00:14:20.482 "dhgroup": "ffdhe3072", 00:14:20.482 "digest": "sha256", 00:14:20.482 "state": "completed" 00:14:20.482 }, 00:14:20.482 "cntlid": 21, 00:14:20.482 "listen_address": { 00:14:20.482 "adrfam": "IPv4", 00:14:20.482 "traddr": "10.0.0.2", 00:14:20.482 "trsvcid": "4420", 00:14:20.482 "trtype": "TCP" 00:14:20.482 }, 00:14:20.482 "peer_address": { 00:14:20.482 "adrfam": "IPv4", 00:14:20.482 "traddr": "10.0.0.1", 00:14:20.482 "trsvcid": "42880", 00:14:20.482 "trtype": "TCP" 00:14:20.482 }, 00:14:20.482 "qid": 0, 00:14:20.482 "state": "enabled", 00:14:20.482 "thread": "nvmf_tgt_poll_group_000" 00:14:20.482 } 00:14:20.482 ]' 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.482 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.740 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.740 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.740 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.740 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.740 05:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.997 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.930 05:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.188 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.780 00:14:22.780 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.780 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.780 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.038 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.038 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.038 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.038 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.038 { 00:14:23.038 "auth": { 
00:14:23.038 "dhgroup": "ffdhe3072", 00:14:23.038 "digest": "sha256", 00:14:23.038 "state": "completed" 00:14:23.038 }, 00:14:23.038 "cntlid": 23, 00:14:23.038 "listen_address": { 00:14:23.038 "adrfam": "IPv4", 00:14:23.038 "traddr": "10.0.0.2", 00:14:23.038 "trsvcid": "4420", 00:14:23.038 "trtype": "TCP" 00:14:23.038 }, 00:14:23.038 "peer_address": { 00:14:23.038 "adrfam": "IPv4", 00:14:23.038 "traddr": "10.0.0.1", 00:14:23.038 "trsvcid": "42914", 00:14:23.038 "trtype": "TCP" 00:14:23.038 }, 00:14:23.038 "qid": 0, 00:14:23.038 "state": "enabled", 00:14:23.038 "thread": "nvmf_tgt_poll_group_000" 00:14:23.038 } 00:14:23.038 ]' 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.038 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.296 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.228 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.485 05:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.485 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.049 00:14:25.049 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.049 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.049 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.307 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.307 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.307 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.307 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.307 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.307 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.307 { 00:14:25.307 "auth": { 00:14:25.307 "dhgroup": "ffdhe4096", 00:14:25.307 "digest": "sha256", 00:14:25.307 "state": "completed" 00:14:25.307 }, 00:14:25.307 "cntlid": 25, 00:14:25.307 "listen_address": { 00:14:25.307 "adrfam": "IPv4", 00:14:25.307 "traddr": "10.0.0.2", 00:14:25.308 "trsvcid": "4420", 00:14:25.308 "trtype": "TCP" 00:14:25.308 }, 00:14:25.308 "peer_address": { 00:14:25.308 
"adrfam": "IPv4", 00:14:25.308 "traddr": "10.0.0.1", 00:14:25.308 "trsvcid": "42940", 00:14:25.308 "trtype": "TCP" 00:14:25.308 }, 00:14:25.308 "qid": 0, 00:14:25.308 "state": "enabled", 00:14:25.308 "thread": "nvmf_tgt_poll_group_000" 00:14:25.308 } 00:14:25.308 ]' 00:14:25.308 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.308 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.308 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.566 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.566 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.566 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.566 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.566 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.824 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.786 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.045 05:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.045 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.303 00:14:27.303 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.303 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.303 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.562 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.562 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.562 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.562 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.562 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.562 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.562 { 00:14:27.562 "auth": { 00:14:27.562 "dhgroup": "ffdhe4096", 00:14:27.562 "digest": "sha256", 00:14:27.562 "state": "completed" 00:14:27.562 }, 00:14:27.562 "cntlid": 27, 00:14:27.562 "listen_address": { 00:14:27.562 "adrfam": "IPv4", 00:14:27.562 "traddr": "10.0.0.2", 00:14:27.562 "trsvcid": "4420", 00:14:27.562 "trtype": "TCP" 00:14:27.562 }, 00:14:27.562 "peer_address": { 00:14:27.562 "adrfam": "IPv4", 00:14:27.562 "traddr": "10.0.0.1", 00:14:27.562 "trsvcid": "42974", 00:14:27.562 "trtype": "TCP" 00:14:27.562 }, 00:14:27.562 "qid": 0, 00:14:27.562 "state": "enabled", 00:14:27.562 "thread": "nvmf_tgt_poll_group_000" 00:14:27.562 } 00:14:27.562 ]' 00:14:27.562 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:14:27.820 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.820 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.820 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.820 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.820 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.820 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.820 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.079 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.011 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.269 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.270 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.270 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.270 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.834 00:14:29.834 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.834 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.834 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.093 { 00:14:30.093 "auth": { 00:14:30.093 "dhgroup": "ffdhe4096", 00:14:30.093 "digest": "sha256", 00:14:30.093 "state": "completed" 00:14:30.093 }, 00:14:30.093 "cntlid": 29, 00:14:30.093 "listen_address": { 00:14:30.093 "adrfam": "IPv4", 00:14:30.093 "traddr": "10.0.0.2", 00:14:30.093 "trsvcid": "4420", 00:14:30.093 "trtype": "TCP" 00:14:30.093 }, 00:14:30.093 "peer_address": { 00:14:30.093 "adrfam": "IPv4", 00:14:30.093 "traddr": "10.0.0.1", 00:14:30.093 "trsvcid": "49762", 00:14:30.093 "trtype": "TCP" 00:14:30.093 }, 00:14:30.093 "qid": 0, 00:14:30.093 "state": "enabled", 00:14:30.093 "thread": "nvmf_tgt_poll_group_000" 00:14:30.093 } 00:14:30.093 ]' 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.093 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.351 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:30.351 05:20:34 
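The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion that precedes every nvmf_subsystem_add_host above is the standard bash idiom for an optional flag. A minimal illustration, with the assignment verbatim from the trace and the usage line assumed ($3 is connect_authenticate's key-index argument):
  # ${var:+word} expands to word only when var is set and non-empty, so an empty
  # ckeys[$3] leaves the array empty and "${ckey[@]}" contributes no flags at all.
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"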
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.351 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.351 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.351 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.609 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:14:31.542 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.543 05:20:35 
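Note the asymmetry in the key3 round that has just started: nvmf_subsystem_add_host receives only --dhchap-key key3, and the attach and nvme connect that follow omit the controller secret as well. Because ckeys[3] is empty, these rounds exercise unidirectional authentication, where the host proves its identity but does not challenge the controller, in contrast to the bidirectional key0 through key2 rounds. Schematically (with ${tgt[@]} standing for the transport arguments, -t/-a/-n/-q/--hostid, shown in the log):
  nvme connect "${tgt[@]}" --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"   # bidirectional
  nvme connect "${tgt[@]}" --dhchap-secret "$key3"                                 # host-only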
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.543 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.109 00:14:32.109 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.109 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.109 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.367 { 00:14:32.367 "auth": { 00:14:32.367 "dhgroup": "ffdhe4096", 00:14:32.367 "digest": "sha256", 00:14:32.367 "state": "completed" 00:14:32.367 }, 00:14:32.367 "cntlid": 31, 00:14:32.367 "listen_address": { 00:14:32.367 "adrfam": "IPv4", 00:14:32.367 "traddr": "10.0.0.2", 00:14:32.367 "trsvcid": "4420", 00:14:32.367 "trtype": "TCP" 00:14:32.367 }, 00:14:32.367 "peer_address": { 00:14:32.367 "adrfam": "IPv4", 00:14:32.367 "traddr": "10.0.0.1", 00:14:32.367 "trsvcid": "49794", 00:14:32.367 "trtype": "TCP" 00:14:32.367 }, 00:14:32.367 "qid": 0, 00:14:32.367 "state": "enabled", 00:14:32.367 "thread": "nvmf_tgt_poll_group_000" 00:14:32.367 } 00:14:32.367 ]' 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.367 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.625 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:33.559 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.125 00:14:34.125 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.125 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.125 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.383 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.383 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.383 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.383 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.383 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.383 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.383 { 00:14:34.383 "auth": { 00:14:34.383 "dhgroup": "ffdhe6144", 00:14:34.383 "digest": "sha256", 00:14:34.383 "state": "completed" 00:14:34.383 }, 00:14:34.383 "cntlid": 33, 00:14:34.383 "listen_address": { 00:14:34.383 "adrfam": "IPv4", 00:14:34.383 "traddr": "10.0.0.2", 00:14:34.383 "trsvcid": "4420", 00:14:34.383 "trtype": "TCP" 00:14:34.383 }, 00:14:34.383 "peer_address": { 00:14:34.383 "adrfam": "IPv4", 00:14:34.383 "traddr": "10.0.0.1", 00:14:34.383 "trsvcid": "49832", 00:14:34.383 "trtype": "TCP" 00:14:34.383 }, 00:14:34.383 "qid": 0, 00:14:34.383 "state": "enabled", 00:14:34.383 "thread": "nvmf_tgt_poll_group_000" 00:14:34.383 } 00:14:34.383 ]' 00:14:34.383 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.641 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.641 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.641 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.641 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.641 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.641 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.641 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.900 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 
4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.095 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.353 00:14:36.353 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.353 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.353 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.611 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.611 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.611 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.611 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.869 { 00:14:36.869 "auth": { 00:14:36.869 "dhgroup": "ffdhe6144", 00:14:36.869 "digest": "sha256", 00:14:36.869 "state": "completed" 00:14:36.869 }, 00:14:36.869 "cntlid": 35, 00:14:36.869 "listen_address": { 00:14:36.869 "adrfam": "IPv4", 00:14:36.869 "traddr": "10.0.0.2", 00:14:36.869 "trsvcid": "4420", 00:14:36.869 "trtype": "TCP" 00:14:36.869 }, 00:14:36.869 "peer_address": { 00:14:36.869 "adrfam": "IPv4", 00:14:36.869 "traddr": "10.0.0.1", 00:14:36.869 "trsvcid": "49858", 00:14:36.869 "trtype": "TCP" 00:14:36.869 }, 00:14:36.869 "qid": 0, 00:14:36.869 "state": "enabled", 00:14:36.869 "thread": "nvmf_tgt_poll_group_000" 00:14:36.869 } 00:14:36.869 ]' 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.869 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.127 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.061 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.061 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.319 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.886 00:14:38.886 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.886 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.886 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.451 { 00:14:39.451 "auth": { 00:14:39.451 "dhgroup": "ffdhe6144", 00:14:39.451 "digest": "sha256", 00:14:39.451 "state": "completed" 00:14:39.451 }, 00:14:39.451 "cntlid": 37, 00:14:39.451 "listen_address": { 00:14:39.451 "adrfam": "IPv4", 00:14:39.451 "traddr": "10.0.0.2", 00:14:39.451 "trsvcid": "4420", 00:14:39.451 "trtype": "TCP" 00:14:39.451 }, 00:14:39.451 "peer_address": { 00:14:39.451 "adrfam": "IPv4", 00:14:39.451 "traddr": "10.0.0.1", 00:14:39.451 "trsvcid": "46112", 00:14:39.451 "trtype": "TCP" 00:14:39.451 }, 00:14:39.451 "qid": 0, 00:14:39.451 "state": "enabled", 00:14:39.451 "thread": "nvmf_tgt_poll_group_000" 00:14:39.451 } 00:14:39.451 ]' 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.451 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.709 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
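Throughout this section two SPDK applications are being driven: bare rpc_cmd talks to the target over the default RPC socket, while every hostrpc expansion shown (target/auth.sh line 31) routes through /var/tmp/host.sock to a second SPDK app acting as the initiator. The wrapper the trace implies is simply:
  hostrpc() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }
Running host and target as separate processes with separate RPC sockets is what lets the suite reconfigure the host's DH-CHAP options between rounds without touching the target.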
00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:14:40.644 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.645 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.645 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.645 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.645 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.583 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.583 05:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.583 { 00:14:41.583 "auth": { 00:14:41.583 "dhgroup": "ffdhe6144", 00:14:41.583 "digest": "sha256", 00:14:41.583 "state": "completed" 00:14:41.583 }, 00:14:41.583 "cntlid": 39, 00:14:41.583 "listen_address": { 00:14:41.583 "adrfam": "IPv4", 00:14:41.583 "traddr": "10.0.0.2", 00:14:41.583 "trsvcid": "4420", 00:14:41.583 "trtype": "TCP" 00:14:41.583 }, 00:14:41.583 "peer_address": { 00:14:41.583 "adrfam": "IPv4", 00:14:41.583 "traddr": "10.0.0.1", 00:14:41.583 "trsvcid": "46150", 00:14:41.583 "trtype": "TCP" 00:14:41.583 }, 00:14:41.583 "qid": 0, 00:14:41.583 "state": "enabled", 00:14:41.583 "thread": "nvmf_tgt_poll_group_000" 00:14:41.583 } 00:14:41.583 ]' 00:14:41.583 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.841 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.841 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.841 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.841 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.841 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.841 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.841 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.100 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.035 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.293 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.859 00:14:43.859 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.859 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.859 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.174 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.174 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.174 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.174 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.174 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.174 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.174 { 00:14:44.174 "auth": { 00:14:44.175 "dhgroup": 
"ffdhe8192", 00:14:44.175 "digest": "sha256", 00:14:44.175 "state": "completed" 00:14:44.175 }, 00:14:44.175 "cntlid": 41, 00:14:44.175 "listen_address": { 00:14:44.175 "adrfam": "IPv4", 00:14:44.175 "traddr": "10.0.0.2", 00:14:44.175 "trsvcid": "4420", 00:14:44.175 "trtype": "TCP" 00:14:44.175 }, 00:14:44.175 "peer_address": { 00:14:44.175 "adrfam": "IPv4", 00:14:44.175 "traddr": "10.0.0.1", 00:14:44.175 "trsvcid": "46178", 00:14:44.175 "trtype": "TCP" 00:14:44.175 }, 00:14:44.175 "qid": 0, 00:14:44.175 "state": "enabled", 00:14:44.175 "thread": "nvmf_tgt_poll_group_000" 00:14:44.175 } 00:14:44.175 ]' 00:14:44.175 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.175 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.175 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.175 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.175 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.432 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.433 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.433 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.691 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.256 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.515 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.449 00:14:46.449 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.449 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.449 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.711 { 00:14:46.711 "auth": { 00:14:46.711 "dhgroup": "ffdhe8192", 00:14:46.711 "digest": "sha256", 00:14:46.711 "state": "completed" 00:14:46.711 }, 00:14:46.711 "cntlid": 43, 00:14:46.711 "listen_address": { 00:14:46.711 "adrfam": "IPv4", 00:14:46.711 "traddr": "10.0.0.2", 00:14:46.711 "trsvcid": "4420", 00:14:46.711 "trtype": "TCP" 00:14:46.711 }, 00:14:46.711 "peer_address": { 00:14:46.711 "adrfam": "IPv4", 00:14:46.711 "traddr": 
"10.0.0.1", 00:14:46.711 "trsvcid": "46212", 00:14:46.711 "trtype": "TCP" 00:14:46.711 }, 00:14:46.711 "qid": 0, 00:14:46.711 "state": "enabled", 00:14:46.711 "thread": "nvmf_tgt_poll_group_000" 00:14:46.711 } 00:14:46.711 ]' 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:46.711 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.971 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.971 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.971 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.971 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.904 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:48.162 05:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.162 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.726 00:14:48.726 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.726 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.726 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.983 { 00:14:48.983 "auth": { 00:14:48.983 "dhgroup": "ffdhe8192", 00:14:48.983 "digest": "sha256", 00:14:48.983 "state": "completed" 00:14:48.983 }, 00:14:48.983 "cntlid": 45, 00:14:48.983 "listen_address": { 00:14:48.983 "adrfam": "IPv4", 00:14:48.983 "traddr": "10.0.0.2", 00:14:48.983 "trsvcid": "4420", 00:14:48.983 "trtype": "TCP" 00:14:48.983 }, 00:14:48.983 "peer_address": { 00:14:48.983 "adrfam": "IPv4", 00:14:48.983 "traddr": "10.0.0.1", 00:14:48.983 "trsvcid": "48864", 00:14:48.983 "trtype": "TCP" 00:14:48.983 }, 00:14:48.983 "qid": 0, 00:14:48.983 "state": "enabled", 00:14:48.983 "thread": "nvmf_tgt_poll_group_000" 00:14:48.983 } 00:14:48.983 ]' 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.983 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.240 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.240 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.240 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.240 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.240 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.498 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.063 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 
--dhchap-key key3 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.320 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.577 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.577 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.577 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.159 00:14:51.159 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.159 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.159 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.415 { 00:14:51.415 "auth": { 00:14:51.415 "dhgroup": "ffdhe8192", 00:14:51.415 "digest": "sha256", 00:14:51.415 "state": "completed" 00:14:51.415 }, 00:14:51.415 "cntlid": 47, 00:14:51.415 "listen_address": { 00:14:51.415 "adrfam": "IPv4", 00:14:51.415 "traddr": "10.0.0.2", 00:14:51.415 "trsvcid": "4420", 00:14:51.415 "trtype": "TCP" 00:14:51.415 }, 00:14:51.415 "peer_address": { 00:14:51.415 "adrfam": "IPv4", 00:14:51.415 "traddr": "10.0.0.1", 00:14:51.415 "trsvcid": "48902", 00:14:51.415 "trtype": "TCP" 00:14:51.415 }, 00:14:51.415 "qid": 0, 00:14:51.415 "state": "enabled", 00:14:51.415 "thread": "nvmf_tgt_poll_group_000" 00:14:51.415 } 00:14:51.415 ]' 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.415 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.416 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.416 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:51.416 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.416 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.672 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.603 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.861 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.118 00:14:53.118 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.118 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.118 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.376 { 00:14:53.376 "auth": { 00:14:53.376 "dhgroup": "null", 00:14:53.376 "digest": "sha384", 00:14:53.376 "state": "completed" 00:14:53.376 }, 00:14:53.376 "cntlid": 49, 00:14:53.376 "listen_address": { 00:14:53.376 "adrfam": "IPv4", 00:14:53.376 "traddr": "10.0.0.2", 00:14:53.376 "trsvcid": "4420", 00:14:53.376 "trtype": "TCP" 00:14:53.376 }, 00:14:53.376 "peer_address": { 00:14:53.376 "adrfam": "IPv4", 00:14:53.376 "traddr": "10.0.0.1", 00:14:53.376 "trsvcid": "48928", 00:14:53.376 "trtype": "TCP" 00:14:53.376 }, 00:14:53.376 "qid": 0, 00:14:53.376 "state": "enabled", 00:14:53.376 "thread": "nvmf_tgt_poll_group_000" 00:14:53.376 } 00:14:53.376 ]' 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.376 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.634 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:53.634 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.634 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.634 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.634 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.892 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:14:54.458 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.717 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:54.717 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.717 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.717 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.717 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.717 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.717 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.975 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.233 00:14:55.233 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.233 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.233 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.799 { 00:14:55.799 "auth": { 00:14:55.799 "dhgroup": "null", 00:14:55.799 "digest": "sha384", 00:14:55.799 "state": "completed" 00:14:55.799 }, 00:14:55.799 "cntlid": 51, 00:14:55.799 "listen_address": { 00:14:55.799 "adrfam": "IPv4", 00:14:55.799 "traddr": "10.0.0.2", 00:14:55.799 "trsvcid": "4420", 00:14:55.799 "trtype": "TCP" 00:14:55.799 }, 00:14:55.799 "peer_address": { 00:14:55.799 "adrfam": "IPv4", 00:14:55.799 "traddr": "10.0.0.1", 00:14:55.799 "trsvcid": "48960", 00:14:55.799 "trtype": "TCP" 00:14:55.799 }, 00:14:55.799 "qid": 0, 00:14:55.799 "state": "enabled", 00:14:55.799 "thread": "nvmf_tgt_poll_group_000" 00:14:55.799 } 00:14:55.799 ]' 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.799 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.800 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:55.800 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.800 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.800 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.800 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.058 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret 
DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.995 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.995 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.561 00:14:57.561 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.561 05:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.561 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.819 { 00:14:57.819 "auth": { 00:14:57.819 "dhgroup": "null", 00:14:57.819 "digest": "sha384", 00:14:57.819 "state": "completed" 00:14:57.819 }, 00:14:57.819 "cntlid": 53, 00:14:57.819 "listen_address": { 00:14:57.819 "adrfam": "IPv4", 00:14:57.819 "traddr": "10.0.0.2", 00:14:57.819 "trsvcid": "4420", 00:14:57.819 "trtype": "TCP" 00:14:57.819 }, 00:14:57.819 "peer_address": { 00:14:57.819 "adrfam": "IPv4", 00:14:57.819 "traddr": "10.0.0.1", 00:14:57.819 "trsvcid": "49002", 00:14:57.819 "trtype": "TCP" 00:14:57.819 }, 00:14:57.819 "qid": 0, 00:14:57.819 "state": "enabled", 00:14:57.819 "thread": "nvmf_tgt_poll_group_000" 00:14:57.819 } 00:14:57.819 ]' 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.819 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.077 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.018 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.291 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.549 00:14:59.807 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.807 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.807 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.065 { 00:15:00.065 "auth": { 00:15:00.065 "dhgroup": "null", 00:15:00.065 "digest": "sha384", 00:15:00.065 "state": "completed" 00:15:00.065 }, 00:15:00.065 "cntlid": 55, 00:15:00.065 "listen_address": { 00:15:00.065 "adrfam": "IPv4", 00:15:00.065 "traddr": "10.0.0.2", 00:15:00.065 "trsvcid": "4420", 00:15:00.065 "trtype": "TCP" 00:15:00.065 }, 00:15:00.065 "peer_address": { 00:15:00.065 "adrfam": "IPv4", 00:15:00.065 "traddr": "10.0.0.1", 00:15:00.065 "trsvcid": "55140", 00:15:00.065 "trtype": "TCP" 00:15:00.065 }, 00:15:00.065 "qid": 0, 00:15:00.065 "state": "enabled", 00:15:00.065 "thread": "nvmf_tgt_poll_group_000" 00:15:00.065 } 00:15:00.065 ]' 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.065 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.323 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:01.256 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.515 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.773 00:15:01.773 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.773 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.773 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.338 05:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.338 { 00:15:02.338 "auth": { 00:15:02.338 "dhgroup": "ffdhe2048", 00:15:02.338 "digest": "sha384", 00:15:02.338 "state": "completed" 00:15:02.338 }, 00:15:02.338 "cntlid": 57, 00:15:02.338 "listen_address": { 00:15:02.338 "adrfam": "IPv4", 00:15:02.338 "traddr": "10.0.0.2", 00:15:02.338 "trsvcid": "4420", 00:15:02.338 "trtype": "TCP" 00:15:02.338 }, 00:15:02.338 "peer_address": { 00:15:02.338 "adrfam": "IPv4", 00:15:02.338 "traddr": "10.0.0.1", 00:15:02.338 "trsvcid": "55168", 00:15:02.338 "trtype": "TCP" 00:15:02.338 }, 00:15:02.338 "qid": 0, 00:15:02.338 "state": "enabled", 00:15:02.338 "thread": "nvmf_tgt_poll_group_000" 00:15:02.338 } 00:15:02.338 ]' 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.338 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.596 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:15:03.529 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.530 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.822 00:15:03.822 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.822 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.822 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.387 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.387 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.387 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.387 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.387 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.387 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.387 { 00:15:04.387 "auth": { 00:15:04.387 "dhgroup": "ffdhe2048", 00:15:04.387 "digest": "sha384", 00:15:04.387 "state": "completed" 00:15:04.387 }, 00:15:04.387 "cntlid": 59, 00:15:04.387 "listen_address": { 00:15:04.387 "adrfam": "IPv4", 00:15:04.387 "traddr": "10.0.0.2", 00:15:04.387 "trsvcid": 
"4420", 00:15:04.387 "trtype": "TCP" 00:15:04.387 }, 00:15:04.387 "peer_address": { 00:15:04.388 "adrfam": "IPv4", 00:15:04.388 "traddr": "10.0.0.1", 00:15:04.388 "trsvcid": "55190", 00:15:04.388 "trtype": "TCP" 00:15:04.388 }, 00:15:04.388 "qid": 0, 00:15:04.388 "state": "enabled", 00:15:04.388 "thread": "nvmf_tgt_poll_group_000" 00:15:04.388 } 00:15:04.388 ]' 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.388 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.646 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.581 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.839 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.097 00:15:06.097 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.097 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.097 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.664 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.665 { 00:15:06.665 "auth": { 00:15:06.665 "dhgroup": "ffdhe2048", 00:15:06.665 "digest": "sha384", 00:15:06.665 "state": "completed" 00:15:06.665 }, 00:15:06.665 "cntlid": 61, 00:15:06.665 "listen_address": { 00:15:06.665 "adrfam": "IPv4", 00:15:06.665 "traddr": "10.0.0.2", 00:15:06.665 "trsvcid": "4420", 00:15:06.665 "trtype": "TCP" 00:15:06.665 }, 00:15:06.665 "peer_address": { 00:15:06.665 "adrfam": "IPv4", 00:15:06.665 "traddr": "10.0.0.1", 00:15:06.665 "trsvcid": "55220", 00:15:06.665 "trtype": "TCP" 00:15:06.665 }, 00:15:06.665 "qid": 0, 00:15:06.665 "state": "enabled", 00:15:06.665 "thread": "nvmf_tgt_poll_group_000" 00:15:06.665 } 00:15:06.665 ]' 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.665 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.923 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.857 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.372 00:15:08.373 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.373 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.373 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.953 { 00:15:08.953 "auth": { 00:15:08.953 "dhgroup": "ffdhe2048", 00:15:08.953 "digest": "sha384", 00:15:08.953 "state": "completed" 00:15:08.953 }, 00:15:08.953 "cntlid": 63, 00:15:08.953 "listen_address": { 00:15:08.953 "adrfam": "IPv4", 00:15:08.953 "traddr": "10.0.0.2", 00:15:08.953 "trsvcid": "4420", 00:15:08.953 "trtype": "TCP" 00:15:08.953 }, 00:15:08.953 "peer_address": { 00:15:08.953 "adrfam": "IPv4", 00:15:08.953 "traddr": "10.0.0.1", 00:15:08.953 "trsvcid": "40722", 00:15:08.953 "trtype": "TCP" 00:15:08.953 }, 00:15:08.953 "qid": 0, 00:15:08.953 "state": "enabled", 00:15:08.953 "thread": "nvmf_tgt_poll_group_000" 00:15:08.953 } 00:15:08.953 ]' 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.953 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.211 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.145 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.145 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:10.145 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.145 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.145 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:10.145 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.146 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.146 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.146 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.146 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 05:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.146 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.146 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.404 00:15:10.661 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.661 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.662 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.919 { 00:15:10.919 "auth": { 00:15:10.919 "dhgroup": "ffdhe3072", 00:15:10.919 "digest": "sha384", 00:15:10.919 "state": "completed" 00:15:10.919 }, 00:15:10.919 "cntlid": 65, 00:15:10.919 "listen_address": { 00:15:10.919 "adrfam": "IPv4", 00:15:10.919 "traddr": "10.0.0.2", 00:15:10.919 "trsvcid": "4420", 00:15:10.919 "trtype": "TCP" 00:15:10.919 }, 00:15:10.919 "peer_address": { 00:15:10.919 "adrfam": "IPv4", 00:15:10.919 "traddr": "10.0.0.1", 00:15:10.919 "trsvcid": "40736", 00:15:10.919 "trtype": "TCP" 00:15:10.919 }, 00:15:10.919 "qid": 0, 00:15:10.919 "state": "enabled", 00:15:10.919 "thread": "nvmf_tgt_poll_group_000" 00:15:10.919 } 00:15:10.919 ]' 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.919 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.919 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.919 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.919 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.176 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.111 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:12.111 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.677 00:15:12.677 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.677 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.677 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.935 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.935 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.935 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.935 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.935 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.935 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.935 { 00:15:12.935 "auth": { 00:15:12.935 "dhgroup": "ffdhe3072", 00:15:12.935 "digest": "sha384", 00:15:12.935 "state": "completed" 00:15:12.935 }, 00:15:12.935 "cntlid": 67, 00:15:12.935 "listen_address": { 00:15:12.935 "adrfam": "IPv4", 00:15:12.935 "traddr": "10.0.0.2", 00:15:12.935 "trsvcid": "4420", 00:15:12.935 "trtype": "TCP" 00:15:12.935 }, 00:15:12.935 "peer_address": { 00:15:12.935 "adrfam": "IPv4", 00:15:12.935 "traddr": "10.0.0.1", 00:15:12.935 "trsvcid": "40776", 00:15:12.935 "trtype": "TCP" 00:15:12.935 }, 00:15:12.935 "qid": 0, 00:15:12.935 "state": "enabled", 00:15:12.935 "thread": "nvmf_tgt_poll_group_000" 00:15:12.935 } 00:15:12.935 ]' 00:15:12.935 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.194 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.194 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.194 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:13.194 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.194 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.194 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.194 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.453 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 
4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:15:14.019 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.278 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:14.278 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.278 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.278 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.278 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.278 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:14.278 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.536 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
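The lines that follow are the verification half of a pass: the host must expose exactly one controller, and the target's qpair must report the negotiated parameters. A minimal sketch of those checks, using the jq filters as they appear in the trace (ffdhe3072 matches the current pass; RPC and socket paths as in the sketch above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock

  # Host view: exactly one attached controller named nvme0.
  [[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target view: the qpair negotiated the expected digest and DH group
  # and finished authentication.
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Drop the host controller before the kernel-initiator step.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0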
00:15:14.803 00:15:14.803 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.803 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.803 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.076 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.076 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.076 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.076 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.076 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.076 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.076 { 00:15:15.076 "auth": { 00:15:15.076 "dhgroup": "ffdhe3072", 00:15:15.076 "digest": "sha384", 00:15:15.076 "state": "completed" 00:15:15.076 }, 00:15:15.076 "cntlid": 69, 00:15:15.076 "listen_address": { 00:15:15.076 "adrfam": "IPv4", 00:15:15.076 "traddr": "10.0.0.2", 00:15:15.076 "trsvcid": "4420", 00:15:15.076 "trtype": "TCP" 00:15:15.076 }, 00:15:15.076 "peer_address": { 00:15:15.076 "adrfam": "IPv4", 00:15:15.076 "traddr": "10.0.0.1", 00:15:15.076 "trsvcid": "40800", 00:15:15.076 "trtype": "TCP" 00:15:15.076 }, 00:15:15.076 "qid": 0, 00:15:15.076 "state": "enabled", 00:15:15.076 "thread": "nvmf_tgt_poll_group_000" 00:15:15.076 } 00:15:15.076 ]' 00:15:15.076 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.335 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.335 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.335 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.335 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.335 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.335 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.335 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.593 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
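Besides the SPDK host, each pass also connects with the kernel initiator, where nvme-cli takes the raw DHHC-1 secrets instead of named keys, then tears everything down. A sketch with the secret bodies elided (the full values appear in the trace above):

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78

  # --dhchap-secret carries the host key and --dhchap-ctrl-secret the
  # controller key, both in the DHHC-1:xx:<base64 key>: form.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" \
      --dhchap-secret 'DHHC-1:02:<elided>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<elided>:'

  # Clean up so the next (dhgroup, key) combination starts from scratch.
  nvme disconnect -n "$SUBNQN"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      "$SUBNQN" "nqn.2014-08.org.nvmexpress:uuid:$HOSTID"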
00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.175 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.740 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.998 00:15:16.998 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.998 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.998 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.256 { 00:15:17.256 "auth": { 00:15:17.256 "dhgroup": "ffdhe3072", 00:15:17.256 "digest": "sha384", 00:15:17.256 "state": "completed" 00:15:17.256 }, 00:15:17.256 "cntlid": 71, 00:15:17.256 "listen_address": { 00:15:17.256 "adrfam": "IPv4", 00:15:17.256 "traddr": "10.0.0.2", 00:15:17.256 "trsvcid": "4420", 00:15:17.256 "trtype": "TCP" 00:15:17.256 }, 00:15:17.256 "peer_address": { 00:15:17.256 "adrfam": "IPv4", 00:15:17.256 "traddr": "10.0.0.1", 00:15:17.256 "trsvcid": "40816", 00:15:17.256 "trtype": "TCP" 00:15:17.256 }, 00:15:17.256 "qid": 0, 00:15:17.256 "state": "enabled", 00:15:17.256 "thread": "nvmf_tgt_poll_group_000" 00:15:17.256 } 00:15:17.256 ]' 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.256 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.513 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.513 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.513 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.770 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.702 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.960 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.217 00:15:19.217 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.217 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.217 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.474 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.474 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.474 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.474 05:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.474 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.474 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.474 { 00:15:19.474 "auth": { 00:15:19.474 "dhgroup": "ffdhe4096", 00:15:19.474 "digest": "sha384", 00:15:19.474 "state": "completed" 00:15:19.474 }, 00:15:19.474 "cntlid": 73, 00:15:19.474 "listen_address": { 00:15:19.474 "adrfam": "IPv4", 00:15:19.474 "traddr": "10.0.0.2", 00:15:19.474 "trsvcid": "4420", 00:15:19.474 "trtype": "TCP" 00:15:19.474 }, 00:15:19.474 "peer_address": { 00:15:19.474 "adrfam": "IPv4", 00:15:19.474 "traddr": "10.0.0.1", 00:15:19.474 "trsvcid": "39352", 00:15:19.474 "trtype": "TCP" 00:15:19.474 }, 00:15:19.474 "qid": 0, 00:15:19.474 "state": "enabled", 00:15:19.474 "thread": "nvmf_tgt_poll_group_000" 00:15:19.474 } 00:15:19.474 ]' 00:15:19.474 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.731 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.731 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.731 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.731 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.731 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.731 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.731 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.988 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.947 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.205 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.463 00:15:21.463 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.463 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.463 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.720 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.720 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.720 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.720 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.978 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.978 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.978 { 00:15:21.978 "auth": { 00:15:21.978 "dhgroup": "ffdhe4096", 
00:15:21.978 "digest": "sha384", 00:15:21.978 "state": "completed" 00:15:21.978 }, 00:15:21.978 "cntlid": 75, 00:15:21.978 "listen_address": { 00:15:21.978 "adrfam": "IPv4", 00:15:21.978 "traddr": "10.0.0.2", 00:15:21.978 "trsvcid": "4420", 00:15:21.978 "trtype": "TCP" 00:15:21.978 }, 00:15:21.978 "peer_address": { 00:15:21.978 "adrfam": "IPv4", 00:15:21.978 "traddr": "10.0.0.1", 00:15:21.978 "trsvcid": "39378", 00:15:21.978 "trtype": "TCP" 00:15:21.978 }, 00:15:21.978 "qid": 0, 00:15:21.978 "state": "enabled", 00:15:21.978 "thread": "nvmf_tgt_poll_group_000" 00:15:21.978 } 00:15:21.978 ]' 00:15:21.978 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.978 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.978 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.978 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.978 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.978 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.978 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.978 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.237 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:15:23.172 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.173 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:23.173 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.173 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.173 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.173 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.173 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.173 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.431 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.690 00:15:23.690 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.690 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.690 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.949 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.949 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.949 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.949 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.949 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.949 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.949 { 00:15:23.949 "auth": { 00:15:23.949 "dhgroup": "ffdhe4096", 00:15:23.949 "digest": "sha384", 00:15:23.949 "state": "completed" 00:15:23.949 }, 00:15:23.949 "cntlid": 77, 00:15:23.949 "listen_address": { 00:15:23.949 "adrfam": "IPv4", 00:15:23.949 "traddr": "10.0.0.2", 00:15:23.949 "trsvcid": "4420", 00:15:23.949 "trtype": "TCP" 00:15:23.949 }, 00:15:23.949 "peer_address": { 00:15:23.949 "adrfam": "IPv4", 00:15:23.949 "traddr": "10.0.0.1", 00:15:23.949 "trsvcid": "39410", 00:15:23.949 "trtype": 
"TCP" 00:15:23.949 }, 00:15:23.949 "qid": 0, 00:15:23.949 "state": "enabled", 00:15:23.949 "thread": "nvmf_tgt_poll_group_000" 00:15:23.949 } 00:15:23.949 ]' 00:15:23.949 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.208 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.208 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.208 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.208 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.208 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.208 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.208 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.466 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.401 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.659 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.917 00:15:26.176 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.176 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.176 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.435 { 00:15:26.435 "auth": { 00:15:26.435 "dhgroup": "ffdhe4096", 00:15:26.435 "digest": "sha384", 00:15:26.435 "state": "completed" 00:15:26.435 }, 00:15:26.435 "cntlid": 79, 00:15:26.435 "listen_address": { 00:15:26.435 "adrfam": "IPv4", 00:15:26.435 "traddr": "10.0.0.2", 00:15:26.435 "trsvcid": "4420", 00:15:26.435 "trtype": "TCP" 00:15:26.435 }, 00:15:26.435 "peer_address": { 00:15:26.435 "adrfam": "IPv4", 00:15:26.435 "traddr": "10.0.0.1", 00:15:26.435 "trsvcid": "39454", 00:15:26.435 "trtype": "TCP" 00:15:26.435 }, 00:15:26.435 "qid": 0, 00:15:26.435 "state": "enabled", 00:15:26.435 "thread": "nvmf_tgt_poll_group_000" 00:15:26.435 } 00:15:26.435 ]' 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.435 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.693 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:15:27.627 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.627 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:27.627 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.627 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.627 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.628 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.628 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.628 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.628 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.887 05:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.887 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.454 00:15:28.454 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.454 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.454 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.713 { 00:15:28.713 "auth": { 00:15:28.713 "dhgroup": "ffdhe6144", 00:15:28.713 "digest": "sha384", 00:15:28.713 "state": "completed" 00:15:28.713 }, 00:15:28.713 "cntlid": 81, 00:15:28.713 "listen_address": { 00:15:28.713 "adrfam": "IPv4", 00:15:28.713 "traddr": "10.0.0.2", 00:15:28.713 "trsvcid": "4420", 00:15:28.713 "trtype": "TCP" 00:15:28.713 }, 00:15:28.713 "peer_address": { 00:15:28.713 "adrfam": "IPv4", 00:15:28.713 "traddr": "10.0.0.1", 00:15:28.713 "trsvcid": "39482", 00:15:28.713 "trtype": "TCP" 00:15:28.713 }, 00:15:28.713 "qid": 0, 00:15:28.713 "state": "enabled", 00:15:28.713 "thread": "nvmf_tgt_poll_group_000" 00:15:28.713 } 00:15:28.713 ]' 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.713 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.280 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.845 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.103 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.669 00:15:30.669 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.669 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.669 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.928 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.928 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.928 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.928 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.928 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.928 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.928 { 00:15:30.928 "auth": { 00:15:30.928 "dhgroup": "ffdhe6144", 00:15:30.928 "digest": "sha384", 00:15:30.928 "state": "completed" 00:15:30.928 }, 00:15:30.928 "cntlid": 83, 00:15:30.928 "listen_address": { 00:15:30.928 "adrfam": "IPv4", 00:15:30.928 "traddr": "10.0.0.2", 00:15:30.928 "trsvcid": "4420", 00:15:30.928 "trtype": "TCP" 00:15:30.928 }, 00:15:30.928 "peer_address": { 00:15:30.928 "adrfam": "IPv4", 00:15:30.928 "traddr": "10.0.0.1", 00:15:30.928 "trsvcid": "43446", 00:15:30.928 "trtype": "TCP" 00:15:30.928 }, 00:15:30.928 "qid": 0, 00:15:30.928 "state": "enabled", 00:15:30.928 "thread": "nvmf_tgt_poll_group_000" 00:15:30.928 } 00:15:30.928 ]' 00:15:30.928 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.928 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.928 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.928 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.928 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.186 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.186 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.186 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.445 05:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.012 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.579 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.837 00:15:32.837 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.837 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.837 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.405 { 00:15:33.405 "auth": { 00:15:33.405 "dhgroup": "ffdhe6144", 00:15:33.405 "digest": "sha384", 00:15:33.405 "state": "completed" 00:15:33.405 }, 00:15:33.405 "cntlid": 85, 00:15:33.405 "listen_address": { 00:15:33.405 "adrfam": "IPv4", 00:15:33.405 "traddr": "10.0.0.2", 00:15:33.405 "trsvcid": "4420", 00:15:33.405 "trtype": "TCP" 00:15:33.405 }, 00:15:33.405 "peer_address": { 00:15:33.405 "adrfam": "IPv4", 00:15:33.405 "traddr": "10.0.0.1", 00:15:33.405 "trsvcid": "43466", 00:15:33.405 "trtype": "TCP" 00:15:33.405 }, 00:15:33.405 "qid": 0, 00:15:33.405 "state": "enabled", 00:15:33.405 "thread": "nvmf_tgt_poll_group_000" 00:15:33.405 } 00:15:33.405 ]' 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.405 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.663 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret 
DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:15:34.230 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.230 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:34.230 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.230 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.488 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.488 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.488 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.488 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.746 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:35.004 00:15:35.004 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.004 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:15:35.004 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.263 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.263 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.263 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.263 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.263 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.263 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.263 { 00:15:35.263 "auth": { 00:15:35.263 "dhgroup": "ffdhe6144", 00:15:35.263 "digest": "sha384", 00:15:35.263 "state": "completed" 00:15:35.263 }, 00:15:35.263 "cntlid": 87, 00:15:35.263 "listen_address": { 00:15:35.263 "adrfam": "IPv4", 00:15:35.263 "traddr": "10.0.0.2", 00:15:35.263 "trsvcid": "4420", 00:15:35.263 "trtype": "TCP" 00:15:35.263 }, 00:15:35.263 "peer_address": { 00:15:35.263 "adrfam": "IPv4", 00:15:35.263 "traddr": "10.0.0.1", 00:15:35.263 "trsvcid": "43488", 00:15:35.263 "trtype": "TCP" 00:15:35.263 }, 00:15:35.263 "qid": 0, 00:15:35.263 "state": "enabled", 00:15:35.263 "thread": "nvmf_tgt_poll_group_000" 00:15:35.263 } 00:15:35.263 ]' 00:15:35.263 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.521 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.521 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.521 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.521 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.521 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.521 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.521 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.780 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.347 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.606 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.864 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.864 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.864 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.431 00:15:37.431 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.431 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.431 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.691 05:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.691 { 00:15:37.691 "auth": { 00:15:37.691 "dhgroup": "ffdhe8192", 00:15:37.691 "digest": "sha384", 00:15:37.691 "state": "completed" 00:15:37.691 }, 00:15:37.691 "cntlid": 89, 00:15:37.691 "listen_address": { 00:15:37.691 "adrfam": "IPv4", 00:15:37.691 "traddr": "10.0.0.2", 00:15:37.691 "trsvcid": "4420", 00:15:37.691 "trtype": "TCP" 00:15:37.691 }, 00:15:37.691 "peer_address": { 00:15:37.691 "adrfam": "IPv4", 00:15:37.691 "traddr": "10.0.0.1", 00:15:37.691 "trsvcid": "43504", 00:15:37.691 "trtype": "TCP" 00:15:37.691 }, 00:15:37.691 "qid": 0, 00:15:37.691 "state": "enabled", 00:15:37.691 "thread": "nvmf_tgt_poll_group_000" 00:15:37.691 } 00:15:37.691 ]' 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.691 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.950 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.950 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.950 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.950 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.950 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.208 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:15:39.143 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.143 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:39.143 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.143 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.144 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.144 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.144 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.144 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.402 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.967 00:15:39.967 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.967 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.967 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.532 05:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.532 { 00:15:40.532 "auth": { 00:15:40.532 "dhgroup": "ffdhe8192", 00:15:40.532 "digest": "sha384", 00:15:40.532 "state": "completed" 00:15:40.532 }, 00:15:40.532 "cntlid": 91, 00:15:40.532 "listen_address": { 00:15:40.532 "adrfam": "IPv4", 00:15:40.532 "traddr": "10.0.0.2", 00:15:40.532 "trsvcid": "4420", 00:15:40.532 "trtype": "TCP" 00:15:40.532 }, 00:15:40.532 "peer_address": { 00:15:40.532 "adrfam": "IPv4", 00:15:40.532 "traddr": "10.0.0.1", 00:15:40.532 "trsvcid": "53996", 00:15:40.532 "trtype": "TCP" 00:15:40.532 }, 00:15:40.532 "qid": 0, 00:15:40.532 "state": "enabled", 00:15:40.532 "thread": "nvmf_tgt_poll_group_000" 00:15:40.532 } 00:15:40.532 ]' 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.532 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.789 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.722 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.981 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.915 00:15:42.915 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.915 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.915 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.915 { 00:15:42.915 "auth": { 00:15:42.915 "dhgroup": "ffdhe8192", 00:15:42.915 "digest": "sha384", 00:15:42.915 "state": "completed" 00:15:42.915 }, 00:15:42.915 "cntlid": 93, 00:15:42.915 "listen_address": { 00:15:42.915 "adrfam": 
"IPv4", 00:15:42.915 "traddr": "10.0.0.2", 00:15:42.915 "trsvcid": "4420", 00:15:42.915 "trtype": "TCP" 00:15:42.915 }, 00:15:42.915 "peer_address": { 00:15:42.915 "adrfam": "IPv4", 00:15:42.915 "traddr": "10.0.0.1", 00:15:42.915 "trsvcid": "54020", 00:15:42.915 "trtype": "TCP" 00:15:42.915 }, 00:15:42.915 "qid": 0, 00:15:42.915 "state": "enabled", 00:15:42.915 "thread": "nvmf_tgt_poll_group_000" 00:15:42.915 } 00:15:42.915 ]' 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.915 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.173 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.173 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.173 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.173 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.174 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.432 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.998 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.565 05:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.565 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.131 00:15:45.132 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.132 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.132 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.390 { 00:15:45.390 "auth": { 00:15:45.390 "dhgroup": "ffdhe8192", 00:15:45.390 "digest": "sha384", 00:15:45.390 "state": "completed" 00:15:45.390 }, 00:15:45.390 "cntlid": 95, 00:15:45.390 "listen_address": { 00:15:45.390 "adrfam": "IPv4", 00:15:45.390 "traddr": "10.0.0.2", 00:15:45.390 "trsvcid": "4420", 00:15:45.390 "trtype": "TCP" 00:15:45.390 }, 00:15:45.390 "peer_address": { 00:15:45.390 "adrfam": "IPv4", 00:15:45.390 "traddr": "10.0.0.1", 00:15:45.390 "trsvcid": "54040", 00:15:45.390 "trtype": "TCP" 00:15:45.390 }, 00:15:45.390 "qid": 0, 00:15:45.390 "state": "enabled", 00:15:45.390 "thread": "nvmf_tgt_poll_group_000" 00:15:45.390 } 00:15:45.390 ]' 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.390 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.648 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.648 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.648 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.648 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.648 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.905 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:46.838 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:47.096 05:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.096 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.354 00:15:47.354 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.354 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.354 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.611 { 00:15:47.611 "auth": { 00:15:47.611 "dhgroup": "null", 00:15:47.611 "digest": "sha512", 00:15:47.611 "state": "completed" 00:15:47.611 }, 00:15:47.611 "cntlid": 97, 00:15:47.611 "listen_address": { 00:15:47.611 "adrfam": "IPv4", 00:15:47.611 "traddr": "10.0.0.2", 00:15:47.611 "trsvcid": "4420", 00:15:47.611 "trtype": "TCP" 00:15:47.611 }, 00:15:47.611 "peer_address": { 00:15:47.611 "adrfam": "IPv4", 00:15:47.611 "traddr": "10.0.0.1", 00:15:47.611 "trsvcid": "54058", 00:15:47.611 "trtype": "TCP" 00:15:47.611 }, 00:15:47.611 "qid": 0, 00:15:47.611 "state": "enabled", 00:15:47.611 "thread": "nvmf_tgt_poll_group_000" 00:15:47.611 } 00:15:47.611 ]' 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.611 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.882 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:47.882 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.882 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.882 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.882 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.152 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.087 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.087 05:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.087 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.653 00:15:49.653 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.653 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.653 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.911 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.911 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.911 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.911 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.911 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.911 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.912 { 00:15:49.912 "auth": { 00:15:49.912 "dhgroup": "null", 00:15:49.912 "digest": "sha512", 00:15:49.912 "state": "completed" 00:15:49.912 }, 00:15:49.912 "cntlid": 99, 00:15:49.912 "listen_address": { 00:15:49.912 "adrfam": "IPv4", 00:15:49.912 "traddr": "10.0.0.2", 00:15:49.912 "trsvcid": "4420", 00:15:49.912 "trtype": "TCP" 00:15:49.912 }, 00:15:49.912 "peer_address": { 00:15:49.912 "adrfam": "IPv4", 00:15:49.912 "traddr": "10.0.0.1", 00:15:49.912 "trsvcid": "51762", 00:15:49.912 "trtype": "TCP" 00:15:49.912 }, 00:15:49.912 "qid": 0, 00:15:49.912 "state": "enabled", 00:15:49.912 "thread": "nvmf_tgt_poll_group_000" 00:15:49.912 } 00:15:49.912 ]' 00:15:49.912 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.912 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.912 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.912 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:50.170 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.170 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
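Each connect_authenticate pass ends with the same three assertions against the target's qpair dump: the negotiated digest, the DH group, and an auth state of "completed". A minimal sketch of that verification, assuming a qpairs variable holding the JSON array printed by nvmf_subsystem_get_qpairs (the helper name is hypothetical; the jq paths are the ones used at target/auth.sh@46-48):

    # Hypothetical condensed form of the checks at target/auth.sh@46-48.
    # Assumes $qpairs holds the JSON array dumped above.
    verify_qpair() {
        local digest=$1 dhgroup=$2
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    }
    verify_qpair sha512 null   # the combination asserted just above

Only after all three checks pass does the script detach the controller and move on to the next key.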
00:15:50.170 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.170 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.427 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.362 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.927 00:15:51.927 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.927 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.927 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.198 { 00:15:52.198 "auth": { 00:15:52.198 "dhgroup": "null", 00:15:52.198 "digest": "sha512", 00:15:52.198 "state": "completed" 00:15:52.198 }, 00:15:52.198 "cntlid": 101, 00:15:52.198 "listen_address": { 00:15:52.198 "adrfam": "IPv4", 00:15:52.198 "traddr": "10.0.0.2", 00:15:52.198 "trsvcid": "4420", 00:15:52.198 "trtype": "TCP" 00:15:52.198 }, 00:15:52.198 "peer_address": { 00:15:52.198 "adrfam": "IPv4", 00:15:52.198 "traddr": "10.0.0.1", 00:15:52.198 "trsvcid": "51782", 00:15:52.198 "trtype": "TCP" 00:15:52.198 }, 00:15:52.198 "qid": 0, 00:15:52.198 "state": "enabled", 00:15:52.198 "thread": "nvmf_tgt_poll_group_000" 00:15:52.198 } 00:15:52.198 ]' 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.198 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.461 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.394 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.652 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:53.652 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.652 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.652 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.652 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:53.652 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.652 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:15:53.653 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.653 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.653 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.653 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.653 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.911 
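Two RPC channels are in play throughout: rpc_cmd talks to the nvmf target, while hostrpc (target/auth.sh@31) forwards to a second SPDK instance listening on /var/tmp/host.sock, which plays the initiator through bdev_nvme. A sketch of one attach round-trip under that reading, using only commands that occur in this log ($hostnqn stands in for the full uuid NQN; key3 has no controller key, so none is passed):

    # Target side: register the host NQN with DH-HMAC-CHAP key3.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key3
    # Host side: attach over TCP; authentication runs during connect.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

The bdev_nvme_get_controllers call that follows confirms the attach actually produced a live nvme0 controller before the qpair state is inspected.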
00:15:53.911 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.911 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.911 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.169 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.169 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.169 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.169 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.169 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.169 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.169 { 00:15:54.169 "auth": { 00:15:54.169 "dhgroup": "null", 00:15:54.169 "digest": "sha512", 00:15:54.169 "state": "completed" 00:15:54.169 }, 00:15:54.169 "cntlid": 103, 00:15:54.169 "listen_address": { 00:15:54.169 "adrfam": "IPv4", 00:15:54.169 "traddr": "10.0.0.2", 00:15:54.169 "trsvcid": "4420", 00:15:54.169 "trtype": "TCP" 00:15:54.169 }, 00:15:54.169 "peer_address": { 00:15:54.169 "adrfam": "IPv4", 00:15:54.169 "traddr": "10.0.0.1", 00:15:54.169 "trsvcid": "51820", 00:15:54.169 "trtype": "TCP" 00:15:54.169 }, 00:15:54.169 "qid": 0, 00:15:54.169 "state": "enabled", 00:15:54.169 "thread": "nvmf_tgt_poll_group_000" 00:15:54.169 } 00:15:54.169 ]' 00:15:54.169 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.427 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.427 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.427 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:54.428 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.428 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.428 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.428 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.686 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.254 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.821 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.079 00:15:56.079 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.079 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.079 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.336 { 00:15:56.336 "auth": { 00:15:56.336 "dhgroup": "ffdhe2048", 00:15:56.336 "digest": "sha512", 00:15:56.336 "state": "completed" 00:15:56.336 }, 00:15:56.336 "cntlid": 105, 00:15:56.336 "listen_address": { 00:15:56.336 "adrfam": "IPv4", 00:15:56.336 "traddr": "10.0.0.2", 00:15:56.336 "trsvcid": "4420", 00:15:56.336 "trtype": "TCP" 00:15:56.336 }, 00:15:56.336 "peer_address": { 00:15:56.336 "adrfam": "IPv4", 00:15:56.336 "traddr": "10.0.0.1", 00:15:56.336 "trsvcid": "51854", 00:15:56.336 "trtype": "TCP" 00:15:56.336 }, 00:15:56.336 "qid": 0, 00:15:56.336 "state": "enabled", 00:15:56.336 "thread": "nvmf_tgt_poll_group_000" 00:15:56.336 } 00:15:56.336 ]' 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.336 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.594 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.594 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.594 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.852 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.785 05:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.785 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.389 00:15:58.389 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.389 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.389 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.647 { 00:15:58.647 "auth": { 00:15:58.647 "dhgroup": "ffdhe2048", 00:15:58.647 "digest": "sha512", 00:15:58.647 "state": "completed" 00:15:58.647 }, 00:15:58.647 "cntlid": 107, 00:15:58.647 "listen_address": { 00:15:58.647 "adrfam": "IPv4", 00:15:58.647 "traddr": "10.0.0.2", 00:15:58.647 "trsvcid": "4420", 00:15:58.647 "trtype": "TCP" 00:15:58.647 }, 00:15:58.647 "peer_address": { 00:15:58.647 "adrfam": "IPv4", 00:15:58.647 "traddr": "10.0.0.1", 00:15:58.647 "trsvcid": "51890", 00:15:58.647 "trtype": "TCP" 00:15:58.647 }, 00:15:58.647 "qid": 0, 00:15:58.647 "state": "enabled", 00:15:58.647 "thread": "nvmf_tgt_poll_group_000" 00:15:58.647 } 00:15:58.647 ]' 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.647 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.214 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.781 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.040 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.607 00:16:00.607 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.607 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.607 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.866 { 00:16:00.866 "auth": { 00:16:00.866 "dhgroup": "ffdhe2048", 
00:16:00.866 "digest": "sha512", 00:16:00.866 "state": "completed" 00:16:00.866 }, 00:16:00.866 "cntlid": 109, 00:16:00.866 "listen_address": { 00:16:00.866 "adrfam": "IPv4", 00:16:00.866 "traddr": "10.0.0.2", 00:16:00.866 "trsvcid": "4420", 00:16:00.866 "trtype": "TCP" 00:16:00.866 }, 00:16:00.866 "peer_address": { 00:16:00.866 "adrfam": "IPv4", 00:16:00.866 "traddr": "10.0.0.1", 00:16:00.866 "trsvcid": "59754", 00:16:00.866 "trtype": "TCP" 00:16:00.866 }, 00:16:00.866 "qid": 0, 00:16:00.866 "state": "enabled", 00:16:00.866 "thread": "nvmf_tgt_poll_group_000" 00:16:00.866 } 00:16:00.866 ]' 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.866 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.866 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.866 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.866 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.432 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.018 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 
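This is the last key of the ffdhe2048 group; the markers target/auth.sh@91-94 visible throughout show how these passes are generated: nested loops over every digest, DH group, and key index, re-applying the host's bdev_nvme_set_options before each attempt. A condensed paraphrase of that driver loop (the array contents are assumptions inferred from the combinations this log actually exercises):

    # Paraphrase of the loops at target/auth.sh@91-94; the array
    # definitions live elsewhere in auth.sh and are assumed here.
    for digest in "${digests[@]}"; do           # e.g. sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do     # e.g. null ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do      # 0..3
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done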
00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.276 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.535 00:16:02.793 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.793 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.793 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.793 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.793 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.793 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.793 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.051 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.051 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.051 { 00:16:03.051 "auth": { 00:16:03.051 "dhgroup": "ffdhe2048", 00:16:03.051 "digest": "sha512", 00:16:03.051 "state": "completed" 00:16:03.051 }, 00:16:03.051 "cntlid": 111, 00:16:03.051 "listen_address": { 00:16:03.051 "adrfam": "IPv4", 00:16:03.051 "traddr": "10.0.0.2", 00:16:03.051 "trsvcid": "4420", 00:16:03.051 "trtype": "TCP" 00:16:03.051 }, 00:16:03.051 "peer_address": { 00:16:03.051 "adrfam": "IPv4", 00:16:03.051 "traddr": "10.0.0.1", 00:16:03.051 "trsvcid": "59794", 00:16:03.051 "trtype": "TCP" 00:16:03.051 }, 00:16:03.051 "qid": 0, 00:16:03.051 "state": 
"enabled", 00:16:03.051 "thread": "nvmf_tgt_poll_group_000" 00:16:03.051 } 00:16:03.051 ]' 00:16:03.051 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.051 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.051 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.051 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.051 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.051 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.051 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.051 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.310 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 
00:16:04.245 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.246 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.246 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.246 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.246 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.246 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.246 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.813 00:16:04.813 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.813 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.813 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.071 { 00:16:05.071 "auth": { 00:16:05.071 "dhgroup": "ffdhe3072", 00:16:05.071 "digest": "sha512", 00:16:05.071 "state": "completed" 00:16:05.071 }, 00:16:05.071 "cntlid": 113, 00:16:05.071 "listen_address": { 00:16:05.071 "adrfam": "IPv4", 00:16:05.071 "traddr": "10.0.0.2", 00:16:05.071 "trsvcid": "4420", 00:16:05.071 "trtype": "TCP" 00:16:05.071 }, 00:16:05.071 "peer_address": { 00:16:05.071 "adrfam": "IPv4", 00:16:05.071 "traddr": "10.0.0.1", 00:16:05.071 "trsvcid": "59822", 00:16:05.071 "trtype": "TCP" 00:16:05.071 }, 00:16:05.071 "qid": 0, 00:16:05.071 "state": "enabled", 00:16:05.071 "thread": "nvmf_tgt_poll_group_000" 00:16:05.071 } 00:16:05.071 ]' 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.071 05:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.071 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.329 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.295 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.553 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.811 00:16:06.811 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.811 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.811 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.070 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.070 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.070 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.070 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.070 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.070 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.070 { 00:16:07.070 "auth": { 00:16:07.070 "dhgroup": "ffdhe3072", 00:16:07.070 "digest": "sha512", 00:16:07.070 "state": "completed" 00:16:07.070 }, 00:16:07.070 "cntlid": 115, 00:16:07.070 "listen_address": { 00:16:07.070 "adrfam": "IPv4", 00:16:07.070 "traddr": "10.0.0.2", 00:16:07.070 "trsvcid": "4420", 00:16:07.070 "trtype": "TCP" 00:16:07.070 }, 00:16:07.070 "peer_address": { 00:16:07.070 "adrfam": "IPv4", 00:16:07.070 "traddr": "10.0.0.1", 00:16:07.070 "trsvcid": "59844", 00:16:07.070 "trtype": "TCP" 00:16:07.070 }, 00:16:07.070 "qid": 0, 00:16:07.070 "state": "enabled", 00:16:07.070 "thread": "nvmf_tgt_poll_group_000" 00:16:07.070 } 00:16:07.070 ]' 00:16:07.070 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.328 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.328 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.328 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.328 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.328 05:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.328 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.329 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.587 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.521 05:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.521 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.089 00:16:09.089 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.089 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.089 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.352 { 00:16:09.352 "auth": { 00:16:09.352 "dhgroup": "ffdhe3072", 00:16:09.352 "digest": "sha512", 00:16:09.352 "state": "completed" 00:16:09.352 }, 00:16:09.352 "cntlid": 117, 00:16:09.352 "listen_address": { 00:16:09.352 "adrfam": "IPv4", 00:16:09.352 "traddr": "10.0.0.2", 00:16:09.352 "trsvcid": "4420", 00:16:09.352 "trtype": "TCP" 00:16:09.352 }, 00:16:09.352 "peer_address": { 00:16:09.352 "adrfam": "IPv4", 00:16:09.352 "traddr": "10.0.0.1", 00:16:09.352 "trsvcid": "55872", 00:16:09.352 "trtype": "TCP" 00:16:09.352 }, 00:16:09.352 "qid": 0, 00:16:09.352 "state": "enabled", 00:16:09.352 "thread": "nvmf_tgt_poll_group_000" 00:16:09.352 } 00:16:09.352 ]' 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.352 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
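Each pass then repeats the connect from the kernel host through nvme-cli, handing over the same secrets in DHHC-1 wire format, before tearing the authorization down again for the next combination. A sketch with the base64 key material elided (the real values are the DHHC-1 strings printed in the log; the placeholders here are illustrative only):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 \
        --dhchap-secret 'DHHC-1:xx:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:xx:<base64 controller secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Deauthorize the host again so the next key/dhgroup pass starts clean.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"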
00:16:09.633 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:16:10.567 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.568 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:10.568 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.568 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.568 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.568 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.568 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.568 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.826 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.084 00:16:11.084 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.084 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.084 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.341 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.341 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.341 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.342 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.342 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.342 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.342 { 00:16:11.342 "auth": { 00:16:11.342 "dhgroup": "ffdhe3072", 00:16:11.342 "digest": "sha512", 00:16:11.342 "state": "completed" 00:16:11.342 }, 00:16:11.342 "cntlid": 119, 00:16:11.342 "listen_address": { 00:16:11.342 "adrfam": "IPv4", 00:16:11.342 "traddr": "10.0.0.2", 00:16:11.342 "trsvcid": "4420", 00:16:11.342 "trtype": "TCP" 00:16:11.342 }, 00:16:11.342 "peer_address": { 00:16:11.342 "adrfam": "IPv4", 00:16:11.342 "traddr": "10.0.0.1", 00:16:11.342 "trsvcid": "55904", 00:16:11.342 "trtype": "TCP" 00:16:11.342 }, 00:16:11.342 "qid": 0, 00:16:11.342 "state": "enabled", 00:16:11.342 "thread": "nvmf_tgt_poll_group_000" 00:16:11.342 } 00:16:11.342 ]' 00:16:11.342 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.600 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.600 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.600 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.600 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.600 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.600 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.600 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.858 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:12.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.793 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.054 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.325 00:16:13.325 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.325 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.325 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.583 { 00:16:13.583 "auth": { 00:16:13.583 "dhgroup": "ffdhe4096", 00:16:13.583 "digest": "sha512", 00:16:13.583 "state": "completed" 00:16:13.583 }, 00:16:13.583 "cntlid": 121, 00:16:13.583 "listen_address": { 00:16:13.583 "adrfam": "IPv4", 00:16:13.583 "traddr": "10.0.0.2", 00:16:13.583 "trsvcid": "4420", 00:16:13.583 "trtype": "TCP" 00:16:13.583 }, 00:16:13.583 "peer_address": { 00:16:13.583 "adrfam": "IPv4", 00:16:13.583 "traddr": "10.0.0.1", 00:16:13.583 "trsvcid": "55916", 00:16:13.583 "trtype": "TCP" 00:16:13.583 }, 00:16:13.583 "qid": 0, 00:16:13.583 "state": "enabled", 00:16:13.583 "thread": "nvmf_tgt_poll_group_000" 00:16:13.583 } 00:16:13.583 ]' 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.583 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.841 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.841 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.841 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.099 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.665 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.923 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.488 00:16:15.488 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.488 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.488 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.747 05:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.747 { 00:16:15.747 "auth": { 00:16:15.747 "dhgroup": "ffdhe4096", 00:16:15.747 "digest": "sha512", 00:16:15.747 "state": "completed" 00:16:15.747 }, 00:16:15.747 "cntlid": 123, 00:16:15.747 "listen_address": { 00:16:15.747 "adrfam": "IPv4", 00:16:15.747 "traddr": "10.0.0.2", 00:16:15.747 "trsvcid": "4420", 00:16:15.747 "trtype": "TCP" 00:16:15.747 }, 00:16:15.747 "peer_address": { 00:16:15.747 "adrfam": "IPv4", 00:16:15.747 "traddr": "10.0.0.1", 00:16:15.747 "trsvcid": "55938", 00:16:15.747 "trtype": "TCP" 00:16:15.747 }, 00:16:15.747 "qid": 0, 00:16:15.747 "state": "enabled", 00:16:15.747 "thread": "nvmf_tgt_poll_group_000" 00:16:15.747 } 00:16:15.747 ]' 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.747 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.313 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:16:16.878 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.878 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:16.878 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.878 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.878 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
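For orientation, the repetition in this section is driven by the nested loops visible at target/auth.sh@92-96: an outer walk over the DH groups and an inner walk over the key indices, re-arming the host's allowed digest/dhgroup before every pass. Roughly, assuming the arrays hold what this excerpt shows (sha512 plus the ffdhe2048..ffdhe6144 groups; the full lists are defined earlier in the script and are not reproduced here):

    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do     # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done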
00:16:16.879 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.879 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.879 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.136 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.398 00:16:17.656 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.656 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.656 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.916 { 00:16:17.916 "auth": { 00:16:17.916 "dhgroup": "ffdhe4096", 00:16:17.916 "digest": "sha512", 00:16:17.916 "state": "completed" 00:16:17.916 }, 00:16:17.916 "cntlid": 125, 00:16:17.916 "listen_address": { 00:16:17.916 "adrfam": "IPv4", 00:16:17.916 "traddr": "10.0.0.2", 00:16:17.916 "trsvcid": "4420", 00:16:17.916 "trtype": "TCP" 00:16:17.916 }, 00:16:17.916 "peer_address": { 00:16:17.916 "adrfam": "IPv4", 00:16:17.916 "traddr": "10.0.0.1", 00:16:17.916 "trsvcid": "55958", 00:16:17.916 "trtype": "TCP" 00:16:17.916 }, 00:16:17.916 "qid": 0, 00:16:17.916 "state": "enabled", 00:16:17.916 "thread": "nvmf_tgt_poll_group_000" 00:16:17.916 } 00:16:17.916 ]' 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.916 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.916 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.916 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.916 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.916 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.916 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.482 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.050 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.308 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.873 00:16:19.873 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.873 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.873 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.131 { 00:16:20.131 "auth": { 00:16:20.131 "dhgroup": "ffdhe4096", 00:16:20.131 "digest": "sha512", 00:16:20.131 "state": "completed" 00:16:20.131 }, 00:16:20.131 "cntlid": 127, 00:16:20.131 "listen_address": { 00:16:20.131 "adrfam": "IPv4", 00:16:20.131 "traddr": "10.0.0.2", 00:16:20.131 "trsvcid": "4420", 00:16:20.131 "trtype": "TCP" 00:16:20.131 }, 
00:16:20.131 "peer_address": { 00:16:20.131 "adrfam": "IPv4", 00:16:20.131 "traddr": "10.0.0.1", 00:16:20.131 "trsvcid": "47006", 00:16:20.131 "trtype": "TCP" 00:16:20.131 }, 00:16:20.131 "qid": 0, 00:16:20.131 "state": "enabled", 00:16:20.131 "thread": "nvmf_tgt_poll_group_000" 00:16:20.131 } 00:16:20.131 ]' 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.131 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.389 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:21.323 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:21.581 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:21.581 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.581 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.582 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.149 00:16:22.149 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.149 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.149 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.407 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.407 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.407 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.407 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.407 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.407 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.407 { 00:16:22.407 "auth": { 00:16:22.407 "dhgroup": "ffdhe6144", 00:16:22.407 "digest": "sha512", 00:16:22.407 "state": "completed" 00:16:22.407 }, 00:16:22.407 "cntlid": 129, 00:16:22.407 "listen_address": { 00:16:22.408 "adrfam": "IPv4", 00:16:22.408 "traddr": "10.0.0.2", 00:16:22.408 "trsvcid": "4420", 00:16:22.408 "trtype": "TCP" 00:16:22.408 }, 00:16:22.408 "peer_address": { 00:16:22.408 "adrfam": "IPv4", 00:16:22.408 "traddr": "10.0.0.1", 00:16:22.408 "trsvcid": "47044", 00:16:22.408 "trtype": "TCP" 00:16:22.408 }, 00:16:22.408 "qid": 0, 00:16:22.408 "state": "enabled", 00:16:22.408 "thread": "nvmf_tgt_poll_group_000" 00:16:22.408 } 00:16:22.408 ]' 00:16:22.408 05:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.408 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.408 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.408 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.408 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.666 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.666 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.666 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.924 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:23.491 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.749 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.317 00:16:24.576 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.576 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.576 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.835 { 00:16:24.835 "auth": { 00:16:24.835 "dhgroup": "ffdhe6144", 00:16:24.835 "digest": "sha512", 00:16:24.835 "state": "completed" 00:16:24.835 }, 00:16:24.835 "cntlid": 131, 00:16:24.835 "listen_address": { 00:16:24.835 "adrfam": "IPv4", 00:16:24.835 "traddr": "10.0.0.2", 00:16:24.835 "trsvcid": "4420", 00:16:24.835 "trtype": "TCP" 00:16:24.835 }, 00:16:24.835 "peer_address": { 00:16:24.835 "adrfam": "IPv4", 00:16:24.835 "traddr": "10.0.0.1", 00:16:24.835 "trsvcid": "47064", 00:16:24.835 "trtype": "TCP" 00:16:24.835 }, 00:16:24.835 "qid": 0, 00:16:24.835 "state": "enabled", 00:16:24.835 "thread": "nvmf_tgt_poll_group_000" 00:16:24.835 } 00:16:24.835 ]' 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.835 05:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.835 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.099 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.034 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
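Each pass in this trace is target/auth.sh's connect_authenticate() exercising one digest/dhgroup/key combination end to end: pin the host's DH-HMAC-CHAP options, admit the host on the subsystem, attach a controller (which forces the handshake), inspect the resulting qpair, and tear down. A minimal shell sketch of the pass being set up here (sha512/ffdhe6144/key2), assuming a target already listening on 10.0.0.2:4420 and that the named keys key2/ckey2 were registered earlier in the run, outside this excerpt:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78

# 1) Host side: restrict the initiator to a single digest/dhgroup combination.
"$RPC" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# 2) Target side (default RPC socket): admit the host with a bidirectional key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3) Host side: attaching the controller triggers the DH-HMAC-CHAP exchange.
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4) Target side: the qpair should report the negotiated digest, dhgroup, and a
#    "completed" auth state, which the [[ ... ]] checks in the trace assert field by field.
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

# 5) Detach before the next combination.
"$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0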
00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.293 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.859 00:16:26.859 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.859 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.859 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.859 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.859 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.859 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.859 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.117 { 00:16:27.117 "auth": { 00:16:27.117 "dhgroup": "ffdhe6144", 00:16:27.117 "digest": "sha512", 00:16:27.117 "state": "completed" 00:16:27.117 }, 00:16:27.117 "cntlid": 133, 00:16:27.117 "listen_address": { 00:16:27.117 "adrfam": "IPv4", 00:16:27.117 "traddr": "10.0.0.2", 00:16:27.117 "trsvcid": "4420", 00:16:27.117 "trtype": "TCP" 00:16:27.117 }, 00:16:27.117 "peer_address": { 00:16:27.117 "adrfam": "IPv4", 00:16:27.117 "traddr": "10.0.0.1", 00:16:27.117 "trsvcid": "47094", 00:16:27.117 "trtype": "TCP" 00:16:27.117 }, 00:16:27.117 "qid": 0, 00:16:27.117 "state": "enabled", 00:16:27.117 "thread": "nvmf_tgt_poll_group_000" 00:16:27.117 } 00:16:27.117 ]' 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.117 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.375 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:16:28.313 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.313 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:28.313 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.313 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.314 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.314 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.314 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.314 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.571 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.138 00:16:29.138 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.138 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.138 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.395 { 00:16:29.395 "auth": { 00:16:29.395 "dhgroup": "ffdhe6144", 00:16:29.395 "digest": "sha512", 00:16:29.395 "state": "completed" 00:16:29.395 }, 00:16:29.395 "cntlid": 135, 00:16:29.395 "listen_address": { 00:16:29.395 "adrfam": "IPv4", 00:16:29.395 "traddr": "10.0.0.2", 00:16:29.395 "trsvcid": "4420", 00:16:29.395 "trtype": "TCP" 00:16:29.395 }, 00:16:29.395 "peer_address": { 00:16:29.395 "adrfam": "IPv4", 00:16:29.395 "traddr": "10.0.0.1", 00:16:29.395 "trsvcid": "54528", 00:16:29.395 "trtype": "TCP" 00:16:29.395 }, 00:16:29.395 "qid": 0, 00:16:29.395 "state": "enabled", 00:16:29.395 "thread": "nvmf_tgt_poll_group_000" 00:16:29.395 } 00:16:29.395 ]' 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.395 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.653 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.653 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.653 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.911 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.849 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.785 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.785 { 00:16:31.785 "auth": { 00:16:31.785 "dhgroup": "ffdhe8192", 00:16:31.785 "digest": "sha512", 00:16:31.785 "state": "completed" 00:16:31.785 }, 00:16:31.785 "cntlid": 137, 00:16:31.785 "listen_address": { 00:16:31.785 "adrfam": "IPv4", 00:16:31.785 "traddr": "10.0.0.2", 00:16:31.785 "trsvcid": "4420", 00:16:31.785 "trtype": "TCP" 00:16:31.785 }, 00:16:31.785 "peer_address": { 00:16:31.785 "adrfam": "IPv4", 00:16:31.785 "traddr": "10.0.0.1", 00:16:31.785 "trsvcid": "54546", 00:16:31.785 "trtype": "TCP" 00:16:31.785 }, 00:16:31.785 "qid": 0, 00:16:31.785 "state": "enabled", 00:16:31.785 "thread": "nvmf_tgt_poll_group_000" 00:16:31.785 } 00:16:31.785 ]' 00:16:31.785 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.043 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.043 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.043 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.043 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.043 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.043 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.043 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.302 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:16:33.237 05:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.237 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.495 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.495 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.495 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.061 00:16:34.061 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.062 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.062 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.320 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.321 { 00:16:34.321 "auth": { 00:16:34.321 "dhgroup": "ffdhe8192", 00:16:34.321 "digest": "sha512", 00:16:34.321 "state": "completed" 00:16:34.321 }, 00:16:34.321 "cntlid": 139, 00:16:34.321 "listen_address": { 00:16:34.321 "adrfam": "IPv4", 00:16:34.321 "traddr": "10.0.0.2", 00:16:34.321 "trsvcid": "4420", 00:16:34.321 "trtype": "TCP" 00:16:34.321 }, 00:16:34.321 "peer_address": { 00:16:34.321 "adrfam": "IPv4", 00:16:34.321 "traddr": "10.0.0.1", 00:16:34.321 "trsvcid": "54564", 00:16:34.321 "trtype": "TCP" 00:16:34.321 }, 00:16:34.321 "qid": 0, 00:16:34.321 "state": "enabled", 00:16:34.321 "thread": "nvmf_tgt_poll_group_000" 00:16:34.321 } 00:16:34.321 ]' 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.321 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.580 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.580 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.580 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.838 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:01:YTIwNTRkMGUyNGEyYzE1Yjg1YzlmNDVhMDFkYTNkMjck1dQ1: --dhchap-ctrl-secret DHHC-1:02:MTk0MjRiNDIzNGI3ZmZkZDUxZWI4YjQyYWZiMWIxNmIzZTg0NWQ0ZGJkZjFlM2ZhAda6Ag==: 00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 
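Alongside the RPC-driven initiator, each combination is also exercised through the kernel host stack: nvme-cli is handed the same keys in their DHHC-1 wire form (the DHHC-1:NN:...: strings in this trace), and the later "NQN:... disconnected 1 controller(s)" line confirms an association actually came up before nvme disconnect dropped it. A sketch of that leg, with the secret material shown as placeholders rather than copied out of the log:

HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78

# Kernel NVMe/TCP connect; -i 1 keeps it to a single I/O queue.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid "$HOSTID" \
    --dhchap-secret 'DHHC-1:01:<host key material>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller key material>'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0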
00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.405 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.996 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.997 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.563 00:16:36.563 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.563 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.563 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.821 { 00:16:36.821 "auth": { 00:16:36.821 "dhgroup": "ffdhe8192", 00:16:36.821 "digest": "sha512", 00:16:36.821 "state": "completed" 00:16:36.821 }, 00:16:36.821 "cntlid": 141, 00:16:36.821 "listen_address": { 00:16:36.821 "adrfam": "IPv4", 00:16:36.821 "traddr": "10.0.0.2", 00:16:36.821 "trsvcid": "4420", 00:16:36.821 "trtype": "TCP" 00:16:36.821 }, 00:16:36.821 "peer_address": { 00:16:36.821 "adrfam": "IPv4", 00:16:36.821 "traddr": "10.0.0.1", 00:16:36.821 "trsvcid": "54582", 00:16:36.821 "trtype": "TCP" 00:16:36.821 }, 00:16:36.821 "qid": 0, 00:16:36.821 "state": "enabled", 00:16:36.821 "thread": "nvmf_tgt_poll_group_000" 00:16:36.821 } 00:16:36.821 ]' 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.821 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.079 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.079 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.079 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.337 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:02:MjI0NTJhYzliOTBjMGE2NjVkMzhjNGZlYzkwMDAwYTA2NmVmOGNhNTRkMWE5MGNiIieK4Q==: --dhchap-ctrl-secret DHHC-1:01:ZWQ4ODcyYTQ0NmM4MTFkZDIzN2IxZmUwODBkZWNiNzYWM+kq: 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.271 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.204 00:16:39.204 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.204 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.204 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.467 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.467 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.467 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.467 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.468 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.468 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:16:39.468 { 00:16:39.468 "auth": { 00:16:39.468 "dhgroup": "ffdhe8192", 00:16:39.468 "digest": "sha512", 00:16:39.468 "state": "completed" 00:16:39.468 }, 00:16:39.468 "cntlid": 143, 00:16:39.468 "listen_address": { 00:16:39.468 "adrfam": "IPv4", 00:16:39.468 "traddr": "10.0.0.2", 00:16:39.468 "trsvcid": "4420", 00:16:39.468 "trtype": "TCP" 00:16:39.468 }, 00:16:39.468 "peer_address": { 00:16:39.468 "adrfam": "IPv4", 00:16:39.468 "traddr": "10.0.0.1", 00:16:39.468 "trsvcid": "34684", 00:16:39.468 "trtype": "TCP" 00:16:39.468 }, 00:16:39.468 "qid": 0, 00:16:39.468 "state": "enabled", 00:16:39.468 "thread": "nvmf_tgt_poll_group_000" 00:16:39.468 } 00:16:39.468 ]' 00:16:39.468 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.468 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.468 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.468 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.468 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.725 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.725 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.725 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.983 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:16:40.917 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.918 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.918 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.176 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.176 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.176 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.741 00:16:41.999 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.999 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.999 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.256 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.256 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.256 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.256 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.256 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.256 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.256 { 
00:16:42.256 "auth": { 00:16:42.256 "dhgroup": "ffdhe8192", 00:16:42.256 "digest": "sha512", 00:16:42.256 "state": "completed" 00:16:42.256 }, 00:16:42.256 "cntlid": 145, 00:16:42.256 "listen_address": { 00:16:42.256 "adrfam": "IPv4", 00:16:42.256 "traddr": "10.0.0.2", 00:16:42.256 "trsvcid": "4420", 00:16:42.256 "trtype": "TCP" 00:16:42.256 }, 00:16:42.256 "peer_address": { 00:16:42.256 "adrfam": "IPv4", 00:16:42.256 "traddr": "10.0.0.1", 00:16:42.256 "trsvcid": "34698", 00:16:42.256 "trtype": "TCP" 00:16:42.256 }, 00:16:42.256 "qid": 0, 00:16:42.256 "state": "enabled", 00:16:42.256 "thread": "nvmf_tgt_poll_group_000" 00:16:42.256 } 00:16:42.256 ]' 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.257 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.514 05:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:00:MTQ5MjlkM2UyYzIxNWQ4OTRmZGExODYyMjk5Yzk5MjNhOTA4MTgwMjRiOGE3NjkykdUksQ==: --dhchap-ctrl-secret DHHC-1:03:MjQyMjNlZjU1MWJiMjcxNjkzYmQ1ZDc2NGIyZjFlNWFlYWRmYjFjY2QwNjk0MDA2OWQ4NzhiNzdlODc5M2RiNhgXIQQ=: 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.446 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:43.447 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.447 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:43.447 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:44.012 2024/07/25 05:22:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:44.012 request: 00:16:44.012 { 00:16:44.012 "method": "bdev_nvme_attach_controller", 00:16:44.012 "params": { 00:16:44.012 "name": "nvme0", 00:16:44.012 "trtype": "tcp", 00:16:44.012 "traddr": "10.0.0.2", 00:16:44.012 "adrfam": "ipv4", 00:16:44.012 "trsvcid": "4420", 00:16:44.012 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:44.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78", 00:16:44.012 "prchk_reftag": false, 00:16:44.012 "prchk_guard": false, 00:16:44.012 "hdgst": false, 00:16:44.012 "ddgst": false, 00:16:44.012 "dhchap_key": "key2" 00:16:44.012 } 00:16:44.012 } 00:16:44.012 Got JSON-RPC error response 00:16:44.012 GoRPCClient: error on JSON-RPC call 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:44.012 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:44.577 2024/07/25 05:22:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 name:nvme0 prchk_guard:%!s(bool=false) 
prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:44.577 request: 00:16:44.577 { 00:16:44.577 "method": "bdev_nvme_attach_controller", 00:16:44.577 "params": { 00:16:44.577 "name": "nvme0", 00:16:44.577 "trtype": "tcp", 00:16:44.577 "traddr": "10.0.0.2", 00:16:44.577 "adrfam": "ipv4", 00:16:44.577 "trsvcid": "4420", 00:16:44.577 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:44.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78", 00:16:44.577 "prchk_reftag": false, 00:16:44.577 "prchk_guard": false, 00:16:44.577 "hdgst": false, 00:16:44.577 "ddgst": false, 00:16:44.577 "dhchap_key": "key1", 00:16:44.577 "dhchap_ctrlr_key": "ckey2" 00:16:44.577 } 00:16:44.577 } 00:16:44.577 Got JSON-RPC error response 00:16:44.577 GoRPCClient: error on JSON-RPC call 00:16:44.577 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:44.577 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:44.577 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:44.577 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:44.577 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:44.577 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.577 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key1 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.836 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.402 2024/07/25 05:22:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:45.402 request: 00:16:45.402 { 00:16:45.402 "method": "bdev_nvme_attach_controller", 00:16:45.402 "params": { 00:16:45.402 "name": "nvme0", 00:16:45.402 "trtype": "tcp", 00:16:45.402 "traddr": "10.0.0.2", 00:16:45.402 "adrfam": "ipv4", 00:16:45.402 "trsvcid": "4420", 00:16:45.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:45.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78", 00:16:45.402 "prchk_reftag": false, 00:16:45.402 "prchk_guard": false, 00:16:45.402 "hdgst": false, 00:16:45.402 "ddgst": false, 00:16:45.402 "dhchap_key": "key1", 00:16:45.402 "dhchap_ctrlr_key": "ckey1" 00:16:45.402 } 00:16:45.402 } 00:16:45.402 Got JSON-RPC error response 00:16:45.402 GoRPCClient: error on JSON-RPC call 00:16:45.402 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:45.402 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.402 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.402 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.402 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77053 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77053 ']' 
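The failed attach attempts above are deliberate negative tests: the target's host entry only carries key1, so presenting key2, or a controller key (ckey2, ckey1) the target was never given, must fail DH-HMAC-CHAP negotiation, which bdev_nvme_attach_controller surfaces as Code=-5 (Input/output error). A minimal sketch of the same check outside the test framework, assuming a target RPC socket on the default /var/tmp/spdk.sock, the host socket on /var/tmp/host.sock, and HOSTNQN standing in for the host NQN used above:

# target side: grant the host only key1
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1
# host side: attaching with a different key must fail; success would be a test bug
if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
       -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
    echo "unexpected: authentication succeeded with the wrong key" >&2
fi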
00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77053 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77053 00:16:45.403 killing process with pid 77053 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77053' 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77053 00:16:45.403 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77053 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82071 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82071 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82071 ']' 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
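At this point the first target (pid 77053) has been killed and a fresh one is relaunched with -L nvmf_auth so the DH-HMAC-CHAP component logs every negotiation step. A sketch of that relaunch, reusing the flags and network namespace exactly as they appear in the log:

# restart nvmf_tgt inside its namespace with auth-component logging enabled
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!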
00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.661 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.920 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.920 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:45.920 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.920 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:45.920 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82071 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82071 ']' 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
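waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock before any further RPCs are issued. The real helper lives in autotest_common.sh; one plausible shape for such a loop, using the standard rpc_get_methods call as the liveness probe:

# poll the RPC socket until the application responds (bounded retries)
for _ in $(seq 1 100); do
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done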
00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.920 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.179 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.179 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:46.179 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:46.179 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.179 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.438 05:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.005 00:16:47.005 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.005 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.005 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
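After a successful attach, connect_authenticate confirms the negotiated parameters by dumping the subsystem's qpairs and inspecting the auth block, as the jq probes that follow show. A condensed sketch of that verification step:

# fetch qpair state from the target and confirm DHCHAP completed with the expected parameters
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]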
00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.263 { 00:16:47.263 "auth": { 00:16:47.263 "dhgroup": "ffdhe8192", 00:16:47.263 "digest": "sha512", 00:16:47.263 "state": "completed" 00:16:47.263 }, 00:16:47.263 "cntlid": 1, 00:16:47.263 "listen_address": { 00:16:47.263 "adrfam": "IPv4", 00:16:47.263 "traddr": "10.0.0.2", 00:16:47.263 "trsvcid": "4420", 00:16:47.263 "trtype": "TCP" 00:16:47.263 }, 00:16:47.263 "peer_address": { 00:16:47.263 "adrfam": "IPv4", 00:16:47.263 "traddr": "10.0.0.1", 00:16:47.263 "trsvcid": "34762", 00:16:47.263 "trtype": "TCP" 00:16:47.263 }, 00:16:47.263 "qid": 0, 00:16:47.263 "state": "enabled", 00:16:47.263 "thread": "nvmf_tgt_poll_group_000" 00:16:47.263 } 00:16:47.263 ]' 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.263 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.522 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.522 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.522 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.522 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.522 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.781 05:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid 4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-secret DHHC-1:03:NjM2OTc1OTIzOGE5NzM3N2Y3NzVkMDIwNmE3NTk3MDJlYWNkNjczYTRkYjRmNzZkMmFkOWM2YzA4OGE0N2ExYvhWAxE=: 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --dhchap-key key3 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:48.715 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.972 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.230 2024/07/25 05:22:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:49.230 request: 00:16:49.230 { 00:16:49.230 "method": "bdev_nvme_attach_controller", 00:16:49.230 "params": { 00:16:49.230 "name": "nvme0", 00:16:49.230 "trtype": "tcp", 00:16:49.230 "traddr": "10.0.0.2", 00:16:49.230 "adrfam": "ipv4", 00:16:49.230 "trsvcid": "4420", 00:16:49.230 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.230 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78", 00:16:49.230 "prchk_reftag": false, 00:16:49.230 "prchk_guard": false, 00:16:49.230 "hdgst": false, 00:16:49.230 "ddgst": false, 00:16:49.230 "dhchap_key": "key3" 00:16:49.230 } 00:16:49.230 } 00:16:49.230 Got JSON-RPC error response 00:16:49.230 GoRPCClient: error on JSON-RPC call 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:49.230 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.487 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.745 2024/07/25 05:22:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:49.745 request: 00:16:49.745 { 00:16:49.745 "method": "bdev_nvme_attach_controller", 00:16:49.745 "params": { 00:16:49.745 "name": "nvme0", 00:16:49.745 "trtype": "tcp", 00:16:49.745 "traddr": "10.0.0.2", 00:16:49.745 "adrfam": "ipv4", 00:16:49.745 "trsvcid": "4420", 00:16:49.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78", 00:16:49.745 "prchk_reftag": false, 00:16:49.745 "prchk_guard": false, 00:16:49.745 "hdgst": false, 00:16:49.745 "ddgst": false, 00:16:49.745 "dhchap_key": "key3" 00:16:49.745 } 00:16:49.745 } 00:16:49.745 Got JSON-RPC error response 00:16:49.745 GoRPCClient: error on JSON-RPC call 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.745 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.002 05:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.002 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:50.003 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.003 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:50.003 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.003 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:50.003 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.003 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.003 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.568 2024/07/25 05:22:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:50.568 request: 00:16:50.568 { 00:16:50.568 "method": "bdev_nvme_attach_controller", 00:16:50.568 "params": { 00:16:50.568 "name": "nvme0", 00:16:50.568 "trtype": "tcp", 00:16:50.568 "traddr": "10.0.0.2", 00:16:50.568 "adrfam": "ipv4", 00:16:50.568 "trsvcid": "4420", 00:16:50.568 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78", 00:16:50.568 "prchk_reftag": false, 00:16:50.568 "prchk_guard": false, 00:16:50.568 "hdgst": false, 00:16:50.568 "ddgst": false, 00:16:50.568 "dhchap_key": "key0", 00:16:50.568 "dhchap_ctrlr_key": "key1" 00:16:50.568 } 00:16:50.568 } 00:16:50.568 Got JSON-RPC error response 00:16:50.568 GoRPCClient: error on JSON-RPC call 00:16:50.568 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:50.568 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:16:50.568 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:50.568 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:50.568 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:50.568 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:50.568 00:16:50.826 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:50.826 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.826 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77103 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77103 ']' 00:16:51.084 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77103 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77103 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:51.342 killing process with pid 77103 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77103' 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77103 00:16:51.342 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77103 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:51.602 05:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.602 rmmod nvme_tcp 00:16:51.602 rmmod nvme_fabrics 00:16:51.602 rmmod nvme_keyring 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82071 ']' 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82071 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82071 ']' 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82071 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82071 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:51.602 killing process with pid 82071 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82071' 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82071 00:16:51.602 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82071 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.880 05:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uDA /tmp/spdk.key-sha256.UoM /tmp/spdk.key-sha384.YMh /tmp/spdk.key-sha512.4oU /tmp/spdk.key-sha512.tBz /tmp/spdk.key-sha384.mSq /tmp/spdk.key-sha256.QMI '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:51.880 00:16:51.880 real 3m2.850s 00:16:51.880 user 7m24.986s 00:16:51.880 sys 0m21.692s 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.880 ************************************ 00:16:51.880 END TEST nvmf_auth_target 00:16:51.880 ************************************ 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.880 ************************************ 00:16:51.880 START TEST nvmf_bdevio_no_huge 00:16:51.880 ************************************ 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:51.880 * Looking for test storage... 
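The bdevio test that starts here first sources nvmf/common.sh (traced below), which derives the host identity once and reuses it for every nvme connect. A sketch of that derivation, assuming nvme-cli's gen-hostnqn and the convention, visible in the log, that the host ID is the uuid portion of the host NQN:

# derive a host NQN and reuse its uuid suffix as the host ID
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")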
00:16:51.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.880 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.881 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:51.881 Cannot find device "nvmf_tgt_br" 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.881 Cannot find device "nvmf_tgt_br2" 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:51.881 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:52.139 Cannot find device "nvmf_tgt_br" 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:52.139 Cannot find device "nvmf_tgt_br2" 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:52.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:52.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:52.139 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:52.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:52.397 00:16:52.397 --- 10.0.0.2 ping statistics --- 00:16:52.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.397 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:52.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:52.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:52.397 00:16:52.397 --- 10.0.0.3 ping statistics --- 00:16:52.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.397 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:52.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:52.397 00:16:52.397 --- 10.0.0.1 ping statistics --- 00:16:52.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.397 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.397 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82467 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82467 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82467 ']' 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.398 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.398 [2024-07-25 05:22:56.455036] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
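Condensed, the nvmf_veth_init sequence traced above builds the veth test topology; every command, interface name, and address below is taken from the trace (link-up steps and the second target interface are omitted for brevity):

# initiator in the root namespace, target behind nvmf_tgt_ns_spdk, joined by a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                          # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                                 # bridge stitches the two peer ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic

The three pings then verify the result: root namespace to 10.0.0.2 and 10.0.0.3 (the two target addresses), and from inside the namespace back to the initiator at 10.0.0.1.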
00:16:52.398 [2024-07-25 05:22:56.455193] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:52.656 [2024-07-25 05:22:56.627460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.656 [2024-07-25 05:22:56.761475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.656 [2024-07-25 05:22:56.761558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.656 [2024-07-25 05:22:56.761579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.656 [2024-07-25 05:22:56.761596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.656 [2024-07-25 05:22:56.761609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.656 [2024-07-25 05:22:56.761786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.656 [2024-07-25 05:22:56.762737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:52.656 [2024-07-25 05:22:56.762814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:52.656 [2024-07-25 05:22:56.762842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 [2024-07-25 05:22:57.485361] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 Malloc0 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 [2024-07-25 05:22:57.525447] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.592 { 00:16:53.592 "params": { 00:16:53.592 "name": "Nvme$subsystem", 00:16:53.592 "trtype": "$TEST_TRANSPORT", 00:16:53.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.592 "adrfam": "ipv4", 00:16:53.592 "trsvcid": "$NVMF_PORT", 00:16:53.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.592 "hdgst": ${hdgst:-false}, 00:16:53.592 "ddgst": ${ddgst:-false} 00:16:53.592 }, 00:16:53.592 "method": "bdev_nvme_attach_controller" 00:16:53.592 } 00:16:53.592 EOF 00:16:53.592 )") 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
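gen_nvmf_target_json above expands one heredoc block per requested subsystem, substitutes the per-subsystem variables, and validates the result with jq; the fully resolved JSON it emits for Nvme1 is printed in the records that follow. bdevio receives that JSON as --json /dev/fd/62, i.e. over an inherited file descriptor rather than a temporary file, most likely ordinary process substitution along these lines (illustrative shape, not the literal script text):

# hand generated JSON to a consumer without touching the filesystem
bdevio --json <(gen_nvmf_target_json)   # the shell materializes a /dev/fd/NN path

The exact descriptor number (62 here) is simply whatever the shell assigns.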
00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:53.592 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.592 "params": { 00:16:53.592 "name": "Nvme1", 00:16:53.592 "trtype": "tcp", 00:16:53.592 "traddr": "10.0.0.2", 00:16:53.592 "adrfam": "ipv4", 00:16:53.592 "trsvcid": "4420", 00:16:53.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.592 "hdgst": false, 00:16:53.592 "ddgst": false 00:16:53.592 }, 00:16:53.592 "method": "bdev_nvme_attach_controller" 00:16:53.592 }' 00:16:53.592 [2024-07-25 05:22:57.583986] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:16:53.592 [2024-07-25 05:22:57.584079] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82521 ] 00:16:53.592 [2024-07-25 05:22:57.730194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.851 [2024-07-25 05:22:57.848883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.851 [2024-07-25 05:22:57.848974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.851 [2024-07-25 05:22:57.848982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.109 I/O targets: 00:16:54.109 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:54.109 00:16:54.109 00:16:54.109 CUnit - A unit testing framework for C - Version 2.1-3 00:16:54.109 http://cunit.sourceforge.net/ 00:16:54.109 00:16:54.109 00:16:54.109 Suite: bdevio tests on: Nvme1n1 00:16:54.109 Test: blockdev write read block ...passed 00:16:54.109 Test: blockdev write zeroes read block ...passed 00:16:54.109 Test: blockdev write zeroes read no split ...passed 00:16:54.109 Test: blockdev write zeroes read split ...passed 00:16:54.109 Test: blockdev write zeroes read split partial ...passed 00:16:54.109 Test: blockdev reset ...[2024-07-25 05:22:58.228905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:54.109 [2024-07-25 05:22:58.229069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc27460 (9): Bad file descriptor 00:16:54.109 passed 00:16:54.109 Test: blockdev write read 8 blocks ...[2024-07-25 05:22:58.246820] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:54.109 passed 00:16:54.109 Test: blockdev write read size > 128k ...passed 00:16:54.109 Test: blockdev write read invalid size ...passed 00:16:54.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:54.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:54.367 Test: blockdev write read max offset ...passed 00:16:54.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:54.367 Test: blockdev writev readv 8 blocks ...passed 00:16:54.367 Test: blockdev writev readv 30 x 1block ...passed 00:16:54.367 Test: blockdev writev readv block ...passed 00:16:54.367 Test: blockdev writev readv size > 128k ...passed 00:16:54.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:54.367 Test: blockdev comparev and writev ...[2024-07-25 05:22:58.421576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.367 [2024-07-25 05:22:58.421642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.367 [2024-07-25 05:22:58.421672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.367 [2024-07-25 05:22:58.421689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:54.367 [2024-07-25 05:22:58.422167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.367 [2024-07-25 05:22:58.422197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:54.367 [2024-07-25 05:22:58.422227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.368 [2024-07-25 05:22:58.422243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:54.368 [2024-07-25 05:22:58.422564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.368 [2024-07-25 05:22:58.422592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:54.368 [2024-07-25 05:22:58.422621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.368 [2024-07-25 05:22:58.422637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:54.368 [2024-07-25 05:22:58.422996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.368 [2024-07-25 05:22:58.423026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:54.368 [2024-07-25 05:22:58.423055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.368 [2024-07-25 05:22:58.423071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:16:54.368 passed 00:16:54.368 Test: blockdev nvme passthru rw ...passed 00:16:54.368 Test: blockdev nvme passthru vendor specific ...[2024-07-25 05:22:58.506336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.368 [2024-07-25 05:22:58.506387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:54.368 [2024-07-25 05:22:58.506514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.368 [2024-07-25 05:22:58.506530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:54.368 passed 00:16:54.368 Test: blockdev nvme admin passthru ...[2024-07-25 05:22:58.506641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.368 [2024-07-25 05:22:58.506663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:54.368 [2024-07-25 05:22:58.506780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.368 [2024-07-25 05:22:58.506797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:54.368 passed 00:16:54.626 Test: blockdev copy ...passed 00:16:54.626 00:16:54.626 Run Summary: Type Total Ran Passed Failed Inactive 00:16:54.626 suites 1 1 n/a 0 0 00:16:54.626 tests 23 23 23 0 0 00:16:54.626 asserts 152 152 152 0 n/a 00:16:54.626 00:16:54.626 Elapsed time = 0.943 seconds 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.886 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.145 rmmod nvme_tcp 00:16:55.145 rmmod nvme_fabrics 00:16:55.145 rmmod nvme_keyring 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:55.145 05:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82467 ']' 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 82467 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82467 ']' 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82467 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82467 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:55.145 killing process with pid 82467 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82467' 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82467 00:16:55.145 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82467 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:55.404 00:16:55.404 real 0m3.635s 00:16:55.404 user 0m13.230s 00:16:55.404 sys 0m1.386s 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:55.404 ************************************ 00:16:55.404 END TEST nvmf_bdevio_no_huge 00:16:55.404 ************************************ 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.404 05:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.671 ************************************ 00:16:55.671 START TEST nvmf_tls 00:16:55.671 ************************************ 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:55.671 * Looking for test storage... 00:16:55.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
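The long runs of repeated /opt/go, /opt/protoc, and /opt/golangci segments in these PATH records come from paths/export.sh prepending the same toolchain directories each time it is sourced; lookup simply ignores the duplicates, so nothing is broken. A guard of the following shape would make the prepend idempotent (path_prepend is hypothetical, not part of the scripts under test):

# prepend a directory to PATH only if it is not already present
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;                # already on PATH, leave it alone
    *) PATH="$1:$PATH" ;;
  esac
}
path_prepend /opt/go/1.21.1/bin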
00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:55.671 Cannot find device 
"nvmf_tgt_br" 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.671 Cannot find device "nvmf_tgt_br2" 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:55.671 Cannot find device "nvmf_tgt_br" 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:55.671 Cannot find device "nvmf_tgt_br2" 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:55.671 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:55.672 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:55.960 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:55.960 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:55.960 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:55.960 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:55.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:16:55.960 00:16:55.960 --- 10.0.0.2 ping statistics --- 00:16:55.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.960 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:55.960 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:55.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:55.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:55.960 00:16:55.960 --- 10.0.0.3 ping statistics --- 00:16:55.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.960 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:55.960 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:55.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:16:55.961 00:16:55.961 --- 10.0.0.1 ping statistics --- 00:16:55.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.961 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82709 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82709 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82709 ']' 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.961 05:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.961 [2024-07-25 05:23:00.111675] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
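Unlike the bdevio target, this one is launched with --wait-for-rpc: the app comes up with only its RPC server active so the socket implementation can be configured before subsystem initialization. Condensed from the records that follow (binary and rpc.py paths shortened from the full paths in the trace):

nvmf_tgt -m 0x2 --wait-for-rpc &          # start paused; only the RPC socket is live
rpc.py sock_set_default_impl -i ssl       # make ssl the default sock implementation
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init               # finish startup with the options applied
rpc.py nvmf_create_transport -t tcp -o    # TCP transport init only happens now

The interleaved sock_impl_get_options | jq records are the test asserting that each knob (tls_version, enable_ktls) round-trips through the RPC layer before it commits to TLS 1.3.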
00:16:55.961 [2024-07-25 05:23:00.111781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.219 [2024-07-25 05:23:00.252903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.219 [2024-07-25 05:23:00.327495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.219 [2024-07-25 05:23:00.327555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.219 [2024-07-25 05:23:00.327570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.219 [2024-07-25 05:23:00.327579] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.219 [2024-07-25 05:23:00.327588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.219 [2024-07-25 05:23:00.327619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:57.154 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:57.413 true 00:16:57.413 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:57.413 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:57.672 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:57.672 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:57.672 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:57.930 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:57.930 05:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:58.188 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:58.188 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:58.188 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:58.446 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:58.446 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:58.704 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:58.704 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:58.704 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:58.704 05:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:58.963 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:58.963 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:58.963 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:59.220 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:59.220 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:59.477 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:59.477 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:59.477 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:59.735 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:59.735 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:59.994 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.zD8smpJmBb 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.BCQf2O0WAX 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.zD8smpJmBb 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BCQf2O0WAX 00:17:00.253 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:00.511 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:01.078 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.zD8smpJmBb 00:17:01.078 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zD8smpJmBb 00:17:01.078 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:01.078 [2024-07-25 05:23:05.203874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.078 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:01.647 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:01.647 [2024-07-25 05:23:05.796031] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:01.647 [2024-07-25 05:23:05.796246] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.647 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:02.212 malloc0 00:17:02.212 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:02.470 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zD8smpJmBb 00:17:02.731 [2024-07-25 05:23:06.699149] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: 
nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:02.731 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zD8smpJmBb 00:17:14.930 Initializing NVMe Controllers 00:17:14.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:14.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:14.930 Initialization complete. Launching workers. 00:17:14.930 ======================================================== 00:17:14.930 Latency(us) 00:17:14.930 Device Information : IOPS MiB/s Average min max 00:17:14.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9305.57 36.35 6879.24 1311.20 10406.51 00:17:14.930 ======================================================== 00:17:14.930 Total : 9305.57 36.35 6879.24 1311.20 10406.51 00:17:14.930 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zD8smpJmBb 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zD8smpJmBb' 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83070 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83070 /var/tmp/bdevperf.sock 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83070 ']' 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.930 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 [2024-07-25 05:23:16.961472] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
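Note on the two keys generated above: `NVMeTLSkey-1:01:...:` is the TLS PSK interchange format used by NVMe/TCP, i.e. a fixed `NVMeTLSkey-1` prefix, a two-digit hash identifier (1 = SHA-256, 2 = SHA-384), and the base64 of the configured PSK bytes followed by their little-endian CRC-32. Below is a minimal sketch of that computation, mirroring the `python -` heredoc that `format_key` in nvmf/common.sh pipes into; the function body is a reconstruction for illustration, not the verbatim helper. Observe that the script keeps the key as its ASCII hex string, which is why the base64 in the log starts with `MDAxMTIy...`.

format_interchange_psk_sketch() {
    local key=$1 hash_id=$2
    python3 - "$key" "$hash_id" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()   # configured PSK, kept in its ASCII form
hash_id = int(sys.argv[2])   # 1 = SHA-256, 2 = SHA-384
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02}:{}:".format(hash_id, base64.b64encode(key + crc).decode()))
EOF
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
# expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: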
00:17:14.930 [2024-07-25 05:23:16.961558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83070 ] 00:17:14.930 [2024-07-25 05:23:17.097680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.930 [2024-07-25 05:23:17.157320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.930 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.930 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:14.930 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zD8smpJmBb 00:17:14.930 [2024-07-25 05:23:17.493857] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.930 [2024-07-25 05:23:17.494019] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:14.930 TLSTESTn1 00:17:14.930 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:14.930 Running I/O for 10 seconds... 00:17:24.944 00:17:24.944 Latency(us) 00:17:24.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.944 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:24.944 Verification LBA range: start 0x0 length 0x2000 00:17:24.944 TLSTESTn1 : 10.02 3712.32 14.50 0.00 0.00 34411.05 8162.21 36461.85 00:17:24.944 =================================================================================================================== 00:17:24.944 Total : 3712.32 14.50 0.00 0.00 34411.05 8162.21 36461.85 00:17:24.944 0 00:17:24.944 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.944 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83070 00:17:24.944 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83070 ']' 00:17:24.944 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83070 00:17:24.944 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:24.944 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83070 00:17:24.945 killing process with pid 83070 00:17:24.945 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.945 00:17:24.945 Latency(us) 00:17:24.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.945 =================================================================================================================== 00:17:24.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83070' 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83070 00:17:24.945 [2024-07-25 05:23:27.766590] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83070 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BCQf2O0WAX 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BCQf2O0WAX 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:24.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BCQf2O0WAX 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BCQf2O0WAX' 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83204 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83204 /var/tmp/bdevperf.sock 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83204 ']' 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.945 05:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.945 [2024-07-25 05:23:27.988380] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:24.945 [2024-07-25 05:23:27.988487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83204 ] 00:17:24.945 [2024-07-25 05:23:28.125263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.945 [2024-07-25 05:23:28.187159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BCQf2O0WAX 00:17:24.945 [2024-07-25 05:23:28.535120] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.945 [2024-07-25 05:23:28.535280] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:24.945 [2024-07-25 05:23:28.542181] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:24.945 [2024-07-25 05:23:28.542598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c10ca0 (107): Transport endpoint is not connected 00:17:24.945 [2024-07-25 05:23:28.543573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c10ca0 (9): Bad file descriptor 00:17:24.945 [2024-07-25 05:23:28.544567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:24.945 [2024-07-25 05:23:28.544622] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:24.945 [2024-07-25 05:23:28.544654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:24.945 2024/07/25 05:23:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.BCQf2O0WAX subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:24.945 request: 00:17:24.945 { 00:17:24.945 "method": "bdev_nvme_attach_controller", 00:17:24.945 "params": { 00:17:24.945 "name": "TLSTEST", 00:17:24.945 "trtype": "tcp", 00:17:24.945 "traddr": "10.0.0.2", 00:17:24.945 "adrfam": "ipv4", 00:17:24.945 "trsvcid": "4420", 00:17:24.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.945 "prchk_reftag": false, 00:17:24.945 "prchk_guard": false, 00:17:24.945 "hdgst": false, 00:17:24.945 "ddgst": false, 00:17:24.945 "psk": "/tmp/tmp.BCQf2O0WAX" 00:17:24.945 } 00:17:24.945 } 00:17:24.945 Got JSON-RPC error response 00:17:24.945 GoRPCClient: error on JSON-RPC call 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83204 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83204 ']' 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83204 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83204 00:17:24.945 killing process with pid 83204 00:17:24.945 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.945 00:17:24.945 Latency(us) 00:17:24.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.945 =================================================================================================================== 00:17:24.945 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83204' 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83204 00:17:24.945 [2024-07-25 05:23:28.600950] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83204 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
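The JSON-RPC failure above is the point of test 146: the initiator presented `/tmp/tmp.BCQf2O0WAX` while the target only has the first key registered, so the attach must fail. The `NOT run_bdevperf ...` wrapper and the `es` bookkeeping visible in the trace invert the exit status. A simplified reconstruction of that helper follows (the real one lives in autotest_common.sh and, as the `(( es > 128 ))` check shows, also special-cases deaths by signal; this sketch omits that):

# Invert a command's status: the step passes only if the command fails.
NOT() {
    local es=0

    "$@" || es=$?

    # Zero exit means the "should fail" command succeeded: fail the step.
    ((es != 0))
}

NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.BCQf2O0WAX && echo "rejected as expected"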
00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zD8smpJmBb 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zD8smpJmBb 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zD8smpJmBb 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:24.945 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zD8smpJmBb' 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83236 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83236 /var/tmp/bdevperf.sock 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83236 ']' 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.946 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.946 [2024-07-25 05:23:28.862696] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:17:24.946 [2024-07-25 05:23:28.862838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83236 ] 00:17:24.946 [2024-07-25 05:23:29.013491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.946 [2024-07-25 05:23:29.118519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.879 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.880 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:25.880 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.zD8smpJmBb 00:17:26.138 [2024-07-25 05:23:30.162410] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.138 [2024-07-25 05:23:30.162523] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:26.138 [2024-07-25 05:23:30.168388] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:26.138 [2024-07-25 05:23:30.168433] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:26.138 [2024-07-25 05:23:30.168491] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:26.138 [2024-07-25 05:23:30.169102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77eca0 (107): Transport endpoint is not connected 00:17:26.138 [2024-07-25 05:23:30.170092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77eca0 (9): Bad file descriptor 00:17:26.138 [2024-07-25 05:23:30.171088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:26.138 [2024-07-25 05:23:30.171128] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:26.138 [2024-07-25 05:23:30.171143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
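The interesting failure in this run is server-side: `Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1`. During the handshake the target builds the lookup identity from the `NVMe0R01` prefix plus the client's host NQN and the subsystem NQN, and only (subsystem, host) pairs registered via `nvmf_subsystem_add_host --psk` resolve to a key. host2 was never registered, so the handshake is aborted even though the key file itself is the valid one. For illustration, this is the registration that would make the pair resolvable (the test intentionally omits it; run it against the target's RPC socket, not the bdevperf one):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.zD8smpJmBb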
00:17:26.138 2024/07/25 05:23:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.zD8smpJmBb subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:26.138 request: 00:17:26.138 { 00:17:26.138 "method": "bdev_nvme_attach_controller", 00:17:26.138 "params": { 00:17:26.138 "name": "TLSTEST", 00:17:26.138 "trtype": "tcp", 00:17:26.138 "traddr": "10.0.0.2", 00:17:26.138 "adrfam": "ipv4", 00:17:26.138 "trsvcid": "4420", 00:17:26.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.138 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:26.138 "prchk_reftag": false, 00:17:26.138 "prchk_guard": false, 00:17:26.138 "hdgst": false, 00:17:26.138 "ddgst": false, 00:17:26.138 "psk": "/tmp/tmp.zD8smpJmBb" 00:17:26.138 } 00:17:26.138 } 00:17:26.138 Got JSON-RPC error response 00:17:26.138 GoRPCClient: error on JSON-RPC call 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83236 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83236 ']' 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83236 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83236 00:17:26.138 killing process with pid 83236 00:17:26.138 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.138 00:17:26.138 Latency(us) 00:17:26.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.138 =================================================================================================================== 00:17:26.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83236' 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83236 00:17:26.138 [2024-07-25 05:23:30.218161] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:26.138 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83236 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
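A word on the teardown pattern that repeats after every bdevperf run here: `killprocess` does not signal a pid blindly. It re-reads the process name with `ps --no-headers -o comm=` and refuses to proceed if the name is `sudo`, which guards against pid reuse and against killing a privilege wrapper. A condensed sketch of that pattern as it appears in the trace (simplified; the real helper also handles non-Linux hosts):

killprocess() {
    local pid=$1 process_name

    # Bail out quietly if the pid is already gone.
    kill -0 "$pid" 2>/dev/null || return 0

    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name != sudo ]] || return 1   # never kill the sudo wrapper

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}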
00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zD8smpJmBb 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zD8smpJmBb 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zD8smpJmBb 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zD8smpJmBb' 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83276 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83276 /var/tmp/bdevperf.sock 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83276 ']' 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.401 [2024-07-25 05:23:30.426402] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:17:26.402 [2024-07-25 05:23:30.426497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83276 ] 00:17:26.402 [2024-07-25 05:23:30.565439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.660 [2024-07-25 05:23:30.638200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zD8smpJmBb 00:17:27.596 [2024-07-25 05:23:31.708825] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.596 [2024-07-25 05:23:31.708992] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:27.596 [2024-07-25 05:23:31.718694] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:27.596 [2024-07-25 05:23:31.718743] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:27.596 [2024-07-25 05:23:31.718807] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:27.596 [2024-07-25 05:23:31.719073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2065ca0 (107): Transport endpoint is not connected 00:17:27.596 [2024-07-25 05:23:31.720056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2065ca0 (9): Bad file descriptor 00:17:27.596 [2024-07-25 05:23:31.721051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:27.596 [2024-07-25 05:23:31.721100] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:27.596 [2024-07-25 05:23:31.721124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
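This third negative case is the mirror image of the previous one: host1 is authorized, but `nqn.2016-06.io.spdk:cnode2` was never created on the target, so the identity `NVMe0R01 ... cnode2` cannot resolve either. When such a failure is not expected, the quickest check is to dump the subsystem and host configuration over RPC. A sketch using the same rpc.py plus jq tooling the script uses elsewhere; `nvmf_get_subsystems` is the standard query RPC, and the jq filter is illustrative:

# List every subsystem NQN and the hosts allowed to connect to it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
    | jq -r '.[] | "\(.nqn): hosts=\([.hosts[]?.nqn] | join(","))"'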
00:17:27.596 2024/07/25 05:23:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.zD8smpJmBb subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:27.596 request: 00:17:27.596 { 00:17:27.596 "method": "bdev_nvme_attach_controller", 00:17:27.596 "params": { 00:17:27.596 "name": "TLSTEST", 00:17:27.596 "trtype": "tcp", 00:17:27.596 "traddr": "10.0.0.2", 00:17:27.596 "adrfam": "ipv4", 00:17:27.596 "trsvcid": "4420", 00:17:27.596 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:27.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:27.596 "prchk_reftag": false, 00:17:27.596 "prchk_guard": false, 00:17:27.596 "hdgst": false, 00:17:27.596 "ddgst": false, 00:17:27.596 "psk": "/tmp/tmp.zD8smpJmBb" 00:17:27.596 } 00:17:27.596 } 00:17:27.596 Got JSON-RPC error response 00:17:27.596 GoRPCClient: error on JSON-RPC call 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83276 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83276 ']' 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83276 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83276 00:17:27.596 killing process with pid 83276 00:17:27.596 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.596 00:17:27.596 Latency(us) 00:17:27.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.596 =================================================================================================================== 00:17:27.596 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83276' 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83276 00:17:27.596 [2024-07-25 05:23:31.762173] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:27.596 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83276 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83322 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83322 /var/tmp/bdevperf.sock 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83322 ']' 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:27.854 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.854 [2024-07-25 05:23:31.972426] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:17:27.854 [2024-07-25 05:23:31.972530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83322 ] 00:17:28.111 [2024-07-25 05:23:32.109541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.111 [2024-07-25 05:23:32.211798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.047 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.047 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:29.047 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:29.047 [2024-07-25 05:23:33.218803] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:29.047 [2024-07-25 05:23:33.220339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4f240 (9): Bad file descriptor 00:17:29.047 [2024-07-25 05:23:33.221334] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:29.047 [2024-07-25 05:23:33.221358] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:29.047 [2024-07-25 05:23:33.221372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
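The variant above (pid 83322) connects with no `--psk` at all. Because the listener was created with `-k`, it accepts only TLS connections, so the plaintext attempt dies at the socket layer with the same `errno 107` before any NVMe-level exchange takes place. If a target ever needs to serve both secured and plaintext initiators, the usual arrangement is two listeners; a sketch, with the second port number being purely illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TLS-only listener, as configured in this test.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k
# Plain TCP listener on a second, hypothetical port.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421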
00:17:29.307 2024/07/25 05:23:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:29.307 request: 00:17:29.307 { 00:17:29.307 "method": "bdev_nvme_attach_controller", 00:17:29.307 "params": { 00:17:29.307 "name": "TLSTEST", 00:17:29.307 "trtype": "tcp", 00:17:29.307 "traddr": "10.0.0.2", 00:17:29.307 "adrfam": "ipv4", 00:17:29.307 "trsvcid": "4420", 00:17:29.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.307 "prchk_reftag": false, 00:17:29.307 "prchk_guard": false, 00:17:29.307 "hdgst": false, 00:17:29.307 "ddgst": false 00:17:29.307 } 00:17:29.307 } 00:17:29.307 Got JSON-RPC error response 00:17:29.307 GoRPCClient: error on JSON-RPC call 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83322 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83322 ']' 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83322 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83322 00:17:29.307 killing process with pid 83322 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83322' 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83322 00:17:29.307 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.307 00:17:29.307 Latency(us) 00:17:29.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.307 =================================================================================================================== 00:17:29.307 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83322 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 82709 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82709 ']' 00:17:29.307 05:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82709 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82709 00:17:29.307 killing process with pid 82709 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82709' 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82709 00:17:29.307 [2024-07-25 05:23:33.453235] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:29.307 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82709 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.JFjRFTOUGt 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.JFjRFTOUGt 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83383 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:29.567 05:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83383 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83383 ']' 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.567 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.567 [2024-07-25 05:23:33.721264] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:29.567 [2024-07-25 05:23:33.721358] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.826 [2024-07-25 05:23:33.857085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.826 [2024-07-25 05:23:33.913799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.826 [2024-07-25 05:23:33.913856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.826 [2024-07-25 05:23:33.913869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.826 [2024-07-25 05:23:33.913877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.826 [2024-07-25 05:23:33.913884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
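This second target instance is launched with `-e 0xFFFF`, so every tracepoint group is enabled and the startup banner above spells out both capture paths. Following the app's own hint, a capture session looks like this; the copy destination is illustrative:

# Live snapshot of the nvmf trace ring for app instance 0 ("-i 0").
spdk_trace -s nvmf -i 0

# Or grab the shared-memory ring for offline decoding later.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.bin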
00:17:29.826 [2024-07-25 05:23:33.913918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.826 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.826 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:29.826 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.826 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:29.826 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.085 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.085 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.JFjRFTOUGt 00:17:30.085 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JFjRFTOUGt 00:17:30.085 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:30.344 [2024-07-25 05:23:34.293473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.344 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:30.602 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:30.861 [2024-07-25 05:23:34.837576] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.861 [2024-07-25 05:23:34.837786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.861 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:31.120 malloc0 00:17:31.120 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:31.378 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt 00:17:31.636 [2024-07-25 05:23:35.716776] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JFjRFTOUGt 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JFjRFTOUGt' 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.636 05:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83472 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83472 /var/tmp/bdevperf.sock 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83472 ']' 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.636 05:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.636 [2024-07-25 05:23:35.794293] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:31.636 [2024-07-25 05:23:35.794386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83472 ] 00:17:31.894 [2024-07-25 05:23:35.927625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.894 [2024-07-25 05:23:35.986160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.894 05:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.894 05:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:31.894 05:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt 00:17:32.152 [2024-07-25 05:23:36.288516] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.152 [2024-07-25 05:23:36.288622] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:32.410 TLSTESTn1 00:17:32.410 05:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:32.410 Running I/O for 10 seconds... 
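Each benchmark in this file follows the same three-step remote-control pattern, which is why the TLSTESTn1 bdev only appears after the process is already running: bdevperf is started with `-z` so it idles on its RPC socket, the controller is attached over that socket, and only then does `bdevperf.py perform_tests` start the configured workload. Condensed from the trace (the `sleep` is a crude stand-in for the harness's `waitforlisten`):

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# 1. Start bdevperf idle (-z) on its own RPC socket.
$spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
sleep 1

# 2. Create the TLS-backed NVMe bdev through that socket.
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.JFjRFTOUGt

# 3. Run the workload; -t 20 bounds how long the helper waits for results.
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests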
00:17:42.378 00:17:42.378 Latency(us) 00:17:42.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.378 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:42.378 Verification LBA range: start 0x0 length 0x2000 00:17:42.378 TLSTESTn1 : 10.02 3831.64 14.97 0.00 0.00 33341.74 6464.23 39321.60 00:17:42.378 =================================================================================================================== 00:17:42.378 Total : 3831.64 14.97 0.00 0.00 33341.74 6464.23 39321.60 00:17:42.378 0 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83472 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83472 ']' 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83472 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83472 00:17:42.378 killing process with pid 83472 00:17:42.378 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.378 00:17:42.378 Latency(us) 00:17:42.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.378 =================================================================================================================== 00:17:42.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83472' 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83472 00:17:42.378 [2024-07-25 05:23:46.552382] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:42.378 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83472 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.JFjRFTOUGt 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JFjRFTOUGt 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JFjRFTOUGt 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:42.636 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JFjRFTOUGt 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:42.636 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JFjRFTOUGt' 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83606 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83606 /var/tmp/bdevperf.sock 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83606 ']' 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.637 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.637 [2024-07-25 05:23:46.768620] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:17:42.637 [2024-07-25 05:23:46.768706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83606 ] 00:17:42.895 [2024-07-25 05:23:46.903141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.895 [2024-07-25 05:23:46.974335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.895 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.895 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:42.895 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt 00:17:43.152 [2024-07-25 05:23:47.297222] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.152 [2024-07-25 05:23:47.297304] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:43.152 [2024-07-25 05:23:47.297317] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.JFjRFTOUGt 00:17:43.152 2024/07/25 05:23:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.JFjRFTOUGt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:17:43.152 request: 00:17:43.152 { 00:17:43.152 "method": "bdev_nvme_attach_controller", 00:17:43.152 "params": { 00:17:43.152 "name": "TLSTEST", 00:17:43.152 "trtype": "tcp", 00:17:43.152 "traddr": "10.0.0.2", 00:17:43.152 "adrfam": "ipv4", 00:17:43.152 "trsvcid": "4420", 00:17:43.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.152 "prchk_reftag": false, 00:17:43.152 "prchk_guard": false, 00:17:43.152 "hdgst": false, 00:17:43.152 "ddgst": false, 00:17:43.152 "psk": "/tmp/tmp.JFjRFTOUGt" 00:17:43.152 } 00:17:43.152 } 00:17:43.152 Got JSON-RPC error response 00:17:43.153 GoRPCClient: error on JSON-RPC call 00:17:43.153 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83606 00:17:43.153 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83606 ']' 00:17:43.153 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83606 00:17:43.153 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:43.153 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83606 00:17:43.411 killing process with pid 83606 00:17:43.411 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.411 00:17:43.411 Latency(us) 00:17:43.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.411 
=================================================================================================================== 00:17:43.411 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83606' 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83606 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83606 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 83383 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83383 ']' 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83383 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83383 00:17:43.411 killing process with pid 83383 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83383' 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83383 00:17:43.411 [2024-07-25 05:23:47.528687] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:43.411 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83383 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83638 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83638 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83638 ']' 00:17:43.670 05:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.670 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:43.670 [2024-07-25 05:23:47.756484] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:43.670 [2024-07-25 05:23:47.756589] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.929 [2024-07-25 05:23:47.894719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.929 [2024-07-25 05:23:47.978927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.929 [2024-07-25 05:23:47.979005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.929 [2024-07-25 05:23:47.979026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.929 [2024-07-25 05:23:47.979044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.929 [2024-07-25 05:23:47.979059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
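The failed attach earlier in this run is the expected result of tls.sh loosening the key's file mode: SPDK refuses to load a PSK whose file is readable beyond its owner, on the initiator (bdev_nvme_load_psk) and, as the next RPCs show, on the target (tcp_load_psk) as well. The gate in shell terms, using the same key file as this log:

  chmod 0666 /tmp/tmp.JFjRFTOUGt   # world-readable: load is refused, attach fails with 'Operation not permitted'
  chmod 0600 /tmp/tmp.JFjRFTOUGt   # owner-only: load succeeds, as in the later runs below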
00:17:43.929 [2024-07-25 05:23:47.979106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.JFjRFTOUGt 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JFjRFTOUGt 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.JFjRFTOUGt 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JFjRFTOUGt 00:17:44.896 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:44.896 [2024-07-25 05:23:49.060238] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.154 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.154 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:45.413 [2024-07-25 05:23:49.540350] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:45.413 [2024-07-25 05:23:49.540552] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.413 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:45.672 malloc0 00:17:45.672 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:45.930 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt 00:17:46.189 [2024-07-25 05:23:50.247144] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect 
permissions for PSK file 00:17:46.189 [2024-07-25 05:23:50.247186] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:46.189 [2024-07-25 05:23:50.247219] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:46.189 2024/07/25 05:23:50 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.JFjRFTOUGt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:46.189 request: 00:17:46.189 { 00:17:46.189 "method": "nvmf_subsystem_add_host", 00:17:46.189 "params": { 00:17:46.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.189 "host": "nqn.2016-06.io.spdk:host1", 00:17:46.189 "psk": "/tmp/tmp.JFjRFTOUGt" 00:17:46.189 } 00:17:46.189 } 00:17:46.189 Got JSON-RPC error response 00:17:46.189 GoRPCClient: error on JSON-RPC call 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 83638 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83638 ']' 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83638 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83638 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:46.189 killing process with pid 83638 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83638' 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83638 00:17:46.189 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83638 00:17:46.447 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.JFjRFTOUGt 00:17:46.447 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:46.447 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.447 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.447 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83748 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:46.448 05:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83748 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83748 ']' 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.448 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.448 [2024-07-25 05:23:50.525115] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:46.448 [2024-07-25 05:23:50.525214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.705 [2024-07-25 05:23:50.660686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.705 [2024-07-25 05:23:50.717791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.705 [2024-07-25 05:23:50.717845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.705 [2024-07-25 05:23:50.717856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.705 [2024-07-25 05:23:50.717864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.705 [2024-07-25 05:23:50.717872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
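With the key back at 0600, the setup_nvmf_tgt helper traced below amounts to six RPCs against the target's default socket; the -k on the listener is what requests a secure channel (TLS) for that address. Condensed here for reference, with the paths and addresses as in this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0      # 32 MiB RAM-backed bdev, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt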
00:17:46.705 [2024-07-25 05:23:50.717898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.JFjRFTOUGt 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JFjRFTOUGt 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:47.641 [2024-07-25 05:23:51.774086] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.641 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:48.208 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:48.208 [2024-07-25 05:23:52.374211] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.208 [2024-07-25 05:23:52.374420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.466 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:48.724 malloc0 00:17:48.724 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:48.982 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt 00:17:49.243 [2024-07-25 05:23:53.269119] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=83856 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 83856 /var/tmp/bdevperf.sock 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83856 ']' 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:49.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:49.243 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.243 [2024-07-25 05:23:53.351993] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:49.243 [2024-07-25 05:23:53.352106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83856 ] 00:17:49.501 [2024-07-25 05:23:53.486557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.501 [2024-07-25 05:23:53.546122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.501 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.501 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:49.501 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt 00:17:49.759 [2024-07-25 05:23:53.883617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.759 [2024-07-25 05:23:53.883720] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:50.017 TLSTESTn1 00:17:50.017 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:50.276 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:50.276 "subsystems": [ 00:17:50.276 { 00:17:50.276 "subsystem": "keyring", 00:17:50.276 "config": [] 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "subsystem": "iobuf", 00:17:50.276 "config": [ 00:17:50.276 { 00:17:50.276 "method": "iobuf_set_options", 00:17:50.276 "params": { 00:17:50.276 "large_bufsize": 135168, 00:17:50.276 "large_pool_count": 1024, 00:17:50.276 "small_bufsize": 8192, 00:17:50.276 "small_pool_count": 8192 00:17:50.276 } 00:17:50.276 } 00:17:50.276 ] 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "subsystem": "sock", 00:17:50.276 "config": [ 00:17:50.276 { 00:17:50.276 "method": "sock_set_default_impl", 00:17:50.276 "params": { 00:17:50.276 "impl_name": "posix" 00:17:50.276 } 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "method": "sock_impl_set_options", 00:17:50.276 "params": { 00:17:50.276 "enable_ktls": false, 00:17:50.276 "enable_placement_id": 0, 00:17:50.276 "enable_quickack": false, 00:17:50.276 "enable_recv_pipe": true, 00:17:50.276 "enable_zerocopy_send_client": false, 00:17:50.276 "enable_zerocopy_send_server": true, 00:17:50.276 "impl_name": "ssl", 00:17:50.276 "recv_buf_size": 4096, 
00:17:50.276 "send_buf_size": 4096, 00:17:50.276 "tls_version": 0, 00:17:50.276 "zerocopy_threshold": 0 00:17:50.276 } 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "method": "sock_impl_set_options", 00:17:50.276 "params": { 00:17:50.276 "enable_ktls": false, 00:17:50.276 "enable_placement_id": 0, 00:17:50.276 "enable_quickack": false, 00:17:50.276 "enable_recv_pipe": true, 00:17:50.276 "enable_zerocopy_send_client": false, 00:17:50.276 "enable_zerocopy_send_server": true, 00:17:50.276 "impl_name": "posix", 00:17:50.276 "recv_buf_size": 2097152, 00:17:50.276 "send_buf_size": 2097152, 00:17:50.276 "tls_version": 0, 00:17:50.276 "zerocopy_threshold": 0 00:17:50.276 } 00:17:50.276 } 00:17:50.276 ] 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "subsystem": "vmd", 00:17:50.276 "config": [] 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "subsystem": "accel", 00:17:50.276 "config": [ 00:17:50.276 { 00:17:50.276 "method": "accel_set_options", 00:17:50.276 "params": { 00:17:50.276 "buf_count": 2048, 00:17:50.276 "large_cache_size": 16, 00:17:50.276 "sequence_count": 2048, 00:17:50.276 "small_cache_size": 128, 00:17:50.276 "task_count": 2048 00:17:50.276 } 00:17:50.276 } 00:17:50.276 ] 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "subsystem": "bdev", 00:17:50.276 "config": [ 00:17:50.276 { 00:17:50.276 "method": "bdev_set_options", 00:17:50.276 "params": { 00:17:50.276 "bdev_auto_examine": true, 00:17:50.276 "bdev_io_cache_size": 256, 00:17:50.276 "bdev_io_pool_size": 65535, 00:17:50.276 "iobuf_large_cache_size": 16, 00:17:50.276 "iobuf_small_cache_size": 128 00:17:50.276 } 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "method": "bdev_raid_set_options", 00:17:50.276 "params": { 00:17:50.276 "process_max_bandwidth_mb_sec": 0, 00:17:50.276 "process_window_size_kb": 1024 00:17:50.276 } 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "method": "bdev_iscsi_set_options", 00:17:50.276 "params": { 00:17:50.276 "timeout_sec": 30 00:17:50.276 } 00:17:50.276 }, 00:17:50.276 { 00:17:50.276 "method": "bdev_nvme_set_options", 00:17:50.276 "params": { 00:17:50.276 "action_on_timeout": "none", 00:17:50.276 "allow_accel_sequence": false, 00:17:50.276 "arbitration_burst": 0, 00:17:50.276 "bdev_retry_count": 3, 00:17:50.276 "ctrlr_loss_timeout_sec": 0, 00:17:50.276 "delay_cmd_submit": true, 00:17:50.276 "dhchap_dhgroups": [ 00:17:50.276 "null", 00:17:50.276 "ffdhe2048", 00:17:50.276 "ffdhe3072", 00:17:50.276 "ffdhe4096", 00:17:50.276 "ffdhe6144", 00:17:50.276 "ffdhe8192" 00:17:50.276 ], 00:17:50.276 "dhchap_digests": [ 00:17:50.276 "sha256", 00:17:50.276 "sha384", 00:17:50.276 "sha512" 00:17:50.276 ], 00:17:50.276 "disable_auto_failback": false, 00:17:50.276 "fast_io_fail_timeout_sec": 0, 00:17:50.276 "generate_uuids": false, 00:17:50.276 "high_priority_weight": 0, 00:17:50.276 "io_path_stat": false, 00:17:50.276 "io_queue_requests": 0, 00:17:50.277 "keep_alive_timeout_ms": 10000, 00:17:50.277 "low_priority_weight": 0, 00:17:50.277 "medium_priority_weight": 0, 00:17:50.277 "nvme_adminq_poll_period_us": 10000, 00:17:50.277 "nvme_error_stat": false, 00:17:50.277 "nvme_ioq_poll_period_us": 0, 00:17:50.277 "rdma_cm_event_timeout_ms": 0, 00:17:50.277 "rdma_max_cq_size": 0, 00:17:50.277 "rdma_srq_size": 0, 00:17:50.277 "reconnect_delay_sec": 0, 00:17:50.277 "timeout_admin_us": 0, 00:17:50.277 "timeout_us": 0, 00:17:50.277 "transport_ack_timeout": 0, 00:17:50.277 "transport_retry_count": 4, 00:17:50.277 "transport_tos": 0 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "bdev_nvme_set_hotplug", 00:17:50.277 "params": { 
00:17:50.277 "enable": false, 00:17:50.277 "period_us": 100000 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "bdev_malloc_create", 00:17:50.277 "params": { 00:17:50.277 "block_size": 4096, 00:17:50.277 "dif_is_head_of_md": false, 00:17:50.277 "dif_pi_format": 0, 00:17:50.277 "dif_type": 0, 00:17:50.277 "md_size": 0, 00:17:50.277 "name": "malloc0", 00:17:50.277 "num_blocks": 8192, 00:17:50.277 "optimal_io_boundary": 0, 00:17:50.277 "physical_block_size": 4096, 00:17:50.277 "uuid": "c8fa9d5e-282c-4c74-b53a-6d825faacd56" 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "bdev_wait_for_examine" 00:17:50.277 } 00:17:50.277 ] 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "subsystem": "nbd", 00:17:50.277 "config": [] 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "subsystem": "scheduler", 00:17:50.277 "config": [ 00:17:50.277 { 00:17:50.277 "method": "framework_set_scheduler", 00:17:50.277 "params": { 00:17:50.277 "name": "static" 00:17:50.277 } 00:17:50.277 } 00:17:50.277 ] 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "subsystem": "nvmf", 00:17:50.277 "config": [ 00:17:50.277 { 00:17:50.277 "method": "nvmf_set_config", 00:17:50.277 "params": { 00:17:50.277 "admin_cmd_passthru": { 00:17:50.277 "identify_ctrlr": false 00:17:50.277 }, 00:17:50.277 "discovery_filter": "match_any" 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "nvmf_set_max_subsystems", 00:17:50.277 "params": { 00:17:50.277 "max_subsystems": 1024 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "nvmf_set_crdt", 00:17:50.277 "params": { 00:17:50.277 "crdt1": 0, 00:17:50.277 "crdt2": 0, 00:17:50.277 "crdt3": 0 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "nvmf_create_transport", 00:17:50.277 "params": { 00:17:50.277 "abort_timeout_sec": 1, 00:17:50.277 "ack_timeout": 0, 00:17:50.277 "buf_cache_size": 4294967295, 00:17:50.277 "c2h_success": false, 00:17:50.277 "data_wr_pool_size": 0, 00:17:50.277 "dif_insert_or_strip": false, 00:17:50.277 "in_capsule_data_size": 4096, 00:17:50.277 "io_unit_size": 131072, 00:17:50.277 "max_aq_depth": 128, 00:17:50.277 "max_io_qpairs_per_ctrlr": 127, 00:17:50.277 "max_io_size": 131072, 00:17:50.277 "max_queue_depth": 128, 00:17:50.277 "num_shared_buffers": 511, 00:17:50.277 "sock_priority": 0, 00:17:50.277 "trtype": "TCP", 00:17:50.277 "zcopy": false 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "nvmf_create_subsystem", 00:17:50.277 "params": { 00:17:50.277 "allow_any_host": false, 00:17:50.277 "ana_reporting": false, 00:17:50.277 "max_cntlid": 65519, 00:17:50.277 "max_namespaces": 10, 00:17:50.277 "min_cntlid": 1, 00:17:50.277 "model_number": "SPDK bdev Controller", 00:17:50.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.277 "serial_number": "SPDK00000000000001" 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "nvmf_subsystem_add_host", 00:17:50.277 "params": { 00:17:50.277 "host": "nqn.2016-06.io.spdk:host1", 00:17:50.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.277 "psk": "/tmp/tmp.JFjRFTOUGt" 00:17:50.277 } 00:17:50.277 }, 00:17:50.277 { 00:17:50.277 "method": "nvmf_subsystem_add_ns", 00:17:50.277 "params": { 00:17:50.277 "namespace": { 00:17:50.277 "bdev_name": "malloc0", 00:17:50.277 "nguid": "C8FA9D5E282C4C74B53A6D825FAACD56", 00:17:50.277 "no_auto_visible": false, 00:17:50.277 "nsid": 1, 00:17:50.277 "uuid": "c8fa9d5e-282c-4c74-b53a-6d825faacd56" 00:17:50.277 }, 00:17:50.277 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:50.277 } 00:17:50.277 }, 
00:17:50.277 { 00:17:50.277 "method": "nvmf_subsystem_add_listener", 00:17:50.277 "params": { 00:17:50.277 "listen_address": { 00:17:50.277 "adrfam": "IPv4", 00:17:50.277 "traddr": "10.0.0.2", 00:17:50.277 "trsvcid": "4420", 00:17:50.277 "trtype": "TCP" 00:17:50.277 }, 00:17:50.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.277 "secure_channel": true 00:17:50.277 } 00:17:50.277 } 00:17:50.277 ] 00:17:50.277 } 00:17:50.277 ] 00:17:50.277 }' 00:17:50.277 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:50.536 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:50.536 "subsystems": [ 00:17:50.536 { 00:17:50.536 "subsystem": "keyring", 00:17:50.536 "config": [] 00:17:50.536 }, 00:17:50.536 { 00:17:50.536 "subsystem": "iobuf", 00:17:50.536 "config": [ 00:17:50.536 { 00:17:50.536 "method": "iobuf_set_options", 00:17:50.536 "params": { 00:17:50.536 "large_bufsize": 135168, 00:17:50.536 "large_pool_count": 1024, 00:17:50.536 "small_bufsize": 8192, 00:17:50.536 "small_pool_count": 8192 00:17:50.536 } 00:17:50.536 } 00:17:50.536 ] 00:17:50.536 }, 00:17:50.536 { 00:17:50.536 "subsystem": "sock", 00:17:50.536 "config": [ 00:17:50.537 { 00:17:50.537 "method": "sock_set_default_impl", 00:17:50.537 "params": { 00:17:50.537 "impl_name": "posix" 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "sock_impl_set_options", 00:17:50.537 "params": { 00:17:50.537 "enable_ktls": false, 00:17:50.537 "enable_placement_id": 0, 00:17:50.537 "enable_quickack": false, 00:17:50.537 "enable_recv_pipe": true, 00:17:50.537 "enable_zerocopy_send_client": false, 00:17:50.537 "enable_zerocopy_send_server": true, 00:17:50.537 "impl_name": "ssl", 00:17:50.537 "recv_buf_size": 4096, 00:17:50.537 "send_buf_size": 4096, 00:17:50.537 "tls_version": 0, 00:17:50.537 "zerocopy_threshold": 0 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "sock_impl_set_options", 00:17:50.537 "params": { 00:17:50.537 "enable_ktls": false, 00:17:50.537 "enable_placement_id": 0, 00:17:50.537 "enable_quickack": false, 00:17:50.537 "enable_recv_pipe": true, 00:17:50.537 "enable_zerocopy_send_client": false, 00:17:50.537 "enable_zerocopy_send_server": true, 00:17:50.537 "impl_name": "posix", 00:17:50.537 "recv_buf_size": 2097152, 00:17:50.537 "send_buf_size": 2097152, 00:17:50.537 "tls_version": 0, 00:17:50.537 "zerocopy_threshold": 0 00:17:50.537 } 00:17:50.537 } 00:17:50.537 ] 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "subsystem": "vmd", 00:17:50.537 "config": [] 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "subsystem": "accel", 00:17:50.537 "config": [ 00:17:50.537 { 00:17:50.537 "method": "accel_set_options", 00:17:50.537 "params": { 00:17:50.537 "buf_count": 2048, 00:17:50.537 "large_cache_size": 16, 00:17:50.537 "sequence_count": 2048, 00:17:50.537 "small_cache_size": 128, 00:17:50.537 "task_count": 2048 00:17:50.537 } 00:17:50.537 } 00:17:50.537 ] 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "subsystem": "bdev", 00:17:50.537 "config": [ 00:17:50.537 { 00:17:50.537 "method": "bdev_set_options", 00:17:50.537 "params": { 00:17:50.537 "bdev_auto_examine": true, 00:17:50.537 "bdev_io_cache_size": 256, 00:17:50.537 "bdev_io_pool_size": 65535, 00:17:50.537 "iobuf_large_cache_size": 16, 00:17:50.537 "iobuf_small_cache_size": 128 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "bdev_raid_set_options", 00:17:50.537 "params": { 00:17:50.537 
"process_max_bandwidth_mb_sec": 0, 00:17:50.537 "process_window_size_kb": 1024 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "bdev_iscsi_set_options", 00:17:50.537 "params": { 00:17:50.537 "timeout_sec": 30 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "bdev_nvme_set_options", 00:17:50.537 "params": { 00:17:50.537 "action_on_timeout": "none", 00:17:50.537 "allow_accel_sequence": false, 00:17:50.537 "arbitration_burst": 0, 00:17:50.537 "bdev_retry_count": 3, 00:17:50.537 "ctrlr_loss_timeout_sec": 0, 00:17:50.537 "delay_cmd_submit": true, 00:17:50.537 "dhchap_dhgroups": [ 00:17:50.537 "null", 00:17:50.537 "ffdhe2048", 00:17:50.537 "ffdhe3072", 00:17:50.537 "ffdhe4096", 00:17:50.537 "ffdhe6144", 00:17:50.537 "ffdhe8192" 00:17:50.537 ], 00:17:50.537 "dhchap_digests": [ 00:17:50.537 "sha256", 00:17:50.537 "sha384", 00:17:50.537 "sha512" 00:17:50.537 ], 00:17:50.537 "disable_auto_failback": false, 00:17:50.537 "fast_io_fail_timeout_sec": 0, 00:17:50.537 "generate_uuids": false, 00:17:50.537 "high_priority_weight": 0, 00:17:50.537 "io_path_stat": false, 00:17:50.537 "io_queue_requests": 512, 00:17:50.537 "keep_alive_timeout_ms": 10000, 00:17:50.537 "low_priority_weight": 0, 00:17:50.537 "medium_priority_weight": 0, 00:17:50.537 "nvme_adminq_poll_period_us": 10000, 00:17:50.537 "nvme_error_stat": false, 00:17:50.537 "nvme_ioq_poll_period_us": 0, 00:17:50.537 "rdma_cm_event_timeout_ms": 0, 00:17:50.537 "rdma_max_cq_size": 0, 00:17:50.537 "rdma_srq_size": 0, 00:17:50.537 "reconnect_delay_sec": 0, 00:17:50.537 "timeout_admin_us": 0, 00:17:50.537 "timeout_us": 0, 00:17:50.537 "transport_ack_timeout": 0, 00:17:50.537 "transport_retry_count": 4, 00:17:50.537 "transport_tos": 0 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "bdev_nvme_attach_controller", 00:17:50.537 "params": { 00:17:50.537 "adrfam": "IPv4", 00:17:50.537 "ctrlr_loss_timeout_sec": 0, 00:17:50.537 "ddgst": false, 00:17:50.537 "fast_io_fail_timeout_sec": 0, 00:17:50.537 "hdgst": false, 00:17:50.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.537 "name": "TLSTEST", 00:17:50.537 "prchk_guard": false, 00:17:50.537 "prchk_reftag": false, 00:17:50.537 "psk": "/tmp/tmp.JFjRFTOUGt", 00:17:50.537 "reconnect_delay_sec": 0, 00:17:50.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.537 "traddr": "10.0.0.2", 00:17:50.537 "trsvcid": "4420", 00:17:50.537 "trtype": "TCP" 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "bdev_nvme_set_hotplug", 00:17:50.537 "params": { 00:17:50.537 "enable": false, 00:17:50.537 "period_us": 100000 00:17:50.537 } 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "method": "bdev_wait_for_examine" 00:17:50.537 } 00:17:50.537 ] 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "subsystem": "nbd", 00:17:50.537 "config": [] 00:17:50.537 } 00:17:50.537 ] 00:17:50.537 }' 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 83856 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83856 ']' 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83856 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83856 00:17:50.537 
05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:50.537 killing process with pid 83856 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83856' 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83856 00:17:50.537 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.537 00:17:50.537 Latency(us) 00:17:50.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.537 =================================================================================================================== 00:17:50.537 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:50.537 [2024-07-25 05:23:54.662925] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:50.537 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83856 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 83748 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83748 ']' 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83748 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83748 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:50.797 killing process with pid 83748 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83748' 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83748 00:17:50.797 [2024-07-25 05:23:54.847565] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:50.797 05:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83748 00:17:51.056 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:51.056 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.056 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:51.056 "subsystems": [ 00:17:51.057 { 00:17:51.057 "subsystem": "keyring", 00:17:51.057 "config": [] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "iobuf", 00:17:51.057 "config": [ 00:17:51.057 { 00:17:51.057 "method": "iobuf_set_options", 00:17:51.057 "params": { 00:17:51.057 "large_bufsize": 135168, 00:17:51.057 "large_pool_count": 1024, 00:17:51.057 "small_bufsize": 8192, 00:17:51.057 "small_pool_count": 8192 00:17:51.057 } 00:17:51.057 } 00:17:51.057 ] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "sock", 
00:17:51.057 "config": [ 00:17:51.057 { 00:17:51.057 "method": "sock_set_default_impl", 00:17:51.057 "params": { 00:17:51.057 "impl_name": "posix" 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "sock_impl_set_options", 00:17:51.057 "params": { 00:17:51.057 "enable_ktls": false, 00:17:51.057 "enable_placement_id": 0, 00:17:51.057 "enable_quickack": false, 00:17:51.057 "enable_recv_pipe": true, 00:17:51.057 "enable_zerocopy_send_client": false, 00:17:51.057 "enable_zerocopy_send_server": true, 00:17:51.057 "impl_name": "ssl", 00:17:51.057 "recv_buf_size": 4096, 00:17:51.057 "send_buf_size": 4096, 00:17:51.057 "tls_version": 0, 00:17:51.057 "zerocopy_threshold": 0 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "sock_impl_set_options", 00:17:51.057 "params": { 00:17:51.057 "enable_ktls": false, 00:17:51.057 "enable_placement_id": 0, 00:17:51.057 "enable_quickack": false, 00:17:51.057 "enable_recv_pipe": true, 00:17:51.057 "enable_zerocopy_send_client": false, 00:17:51.057 "enable_zerocopy_send_server": true, 00:17:51.057 "impl_name": "posix", 00:17:51.057 "recv_buf_size": 2097152, 00:17:51.057 "send_buf_size": 2097152, 00:17:51.057 "tls_version": 0, 00:17:51.057 "zerocopy_threshold": 0 00:17:51.057 } 00:17:51.057 } 00:17:51.057 ] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "vmd", 00:17:51.057 "config": [] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "accel", 00:17:51.057 "config": [ 00:17:51.057 { 00:17:51.057 "method": "accel_set_options", 00:17:51.057 "params": { 00:17:51.057 "buf_count": 2048, 00:17:51.057 "large_cache_size": 16, 00:17:51.057 "sequence_count": 2048, 00:17:51.057 "small_cache_size": 128, 00:17:51.057 "task_count": 2048 00:17:51.057 } 00:17:51.057 } 00:17:51.057 ] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "bdev", 00:17:51.057 "config": [ 00:17:51.057 { 00:17:51.057 "method": "bdev_set_options", 00:17:51.057 "params": { 00:17:51.057 "bdev_auto_examine": true, 00:17:51.057 "bdev_io_cache_size": 256, 00:17:51.057 "bdev_io_pool_size": 65535, 00:17:51.057 "iobuf_large_cache_size": 16, 00:17:51.057 "iobuf_small_cache_size": 128 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "bdev_raid_set_options", 00:17:51.057 "params": { 00:17:51.057 "process_max_bandwidth_mb_sec": 0, 00:17:51.057 "process_window_size_kb": 1024 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "bdev_iscsi_set_options", 00:17:51.057 "params": { 00:17:51.057 "timeout_sec": 30 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "bdev_nvme_set_options", 00:17:51.057 "params": { 00:17:51.057 "action_on_timeout": "none", 00:17:51.057 "allow_accel_sequence": false, 00:17:51.057 "arbitration_burst": 0, 00:17:51.057 "bdev_retry_count": 3, 00:17:51.057 "ctrlr_loss_timeout_sec": 0, 00:17:51.057 "delay_cmd_submit": true, 00:17:51.057 "dhchap_dhgroups": [ 00:17:51.057 "null", 00:17:51.057 "ffdhe2048", 00:17:51.057 "ffdhe3072", 00:17:51.057 "ffdhe4096", 00:17:51.057 "ffdhe6144", 00:17:51.057 "ffdhe8192" 00:17:51.057 ], 00:17:51.057 "dhchap_digests": [ 00:17:51.057 "sha256", 00:17:51.057 "sha384", 00:17:51.057 "sha512" 00:17:51.057 ], 00:17:51.057 "disable_auto_failback": false, 00:17:51.057 "fast_io_fail_timeout_sec": 0, 00:17:51.057 "generate_uuids": false, 00:17:51.057 "high_priority_weight": 0, 00:17:51.057 "io_path_stat": false, 00:17:51.057 "io_queue_requests": 0, 00:17:51.057 "keep_alive_timeout_ms": 10000, 00:17:51.057 "low_priority_weight": 0, 00:17:51.057 
"medium_priority_weight": 0, 00:17:51.057 "nvme_adminq_poll_period_us": 10000, 00:17:51.057 "nvme_error_stat": false, 00:17:51.057 "nvme_ioq_poll_period_us": 0, 00:17:51.057 "rdma_cm_event_timeout_ms": 0, 00:17:51.057 "rdma_max_cq_size": 0, 00:17:51.057 "rdma_srq_size": 0, 00:17:51.057 "reconnect_delay_sec": 0, 00:17:51.057 "timeout_admin_us": 0, 00:17:51.057 "timeout_us": 0, 00:17:51.057 "transport_ack_timeout": 0, 00:17:51.057 "transport_retry_count": 4, 00:17:51.057 "transport_tos": 0 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "bdev_nvme_set_hotplug", 00:17:51.057 "params": { 00:17:51.057 "enable": false, 00:17:51.057 "period_us": 100000 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "bdev_malloc_create", 00:17:51.057 "params": { 00:17:51.057 "block_size": 4096, 00:17:51.057 "dif_is_head_of_md": false, 00:17:51.057 "dif_pi_format": 0, 00:17:51.057 "dif_type": 0, 00:17:51.057 "md_size": 0, 00:17:51.057 "name": "malloc0", 00:17:51.057 "num_blocks": 8192, 00:17:51.057 "optimal_io_boundary": 0, 00:17:51.057 "physical_block_size": 4096, 00:17:51.057 "uuid": "c8fa9d5e-282c-4c74-b53a-6d825faacd56" 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "bdev_wait_for_examine" 00:17:51.057 } 00:17:51.057 ] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "nbd", 00:17:51.057 "config": [] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "scheduler", 00:17:51.057 "config": [ 00:17:51.057 { 00:17:51.057 "method": "framework_set_scheduler", 00:17:51.057 "params": { 00:17:51.057 "name": "static" 00:17:51.057 } 00:17:51.057 } 00:17:51.057 ] 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "subsystem": "nvmf", 00:17:51.057 "config": [ 00:17:51.057 { 00:17:51.057 "method": "nvmf_set_config", 00:17:51.057 "params": { 00:17:51.057 "admin_cmd_passthru": { 00:17:51.057 "identify_ctrlr": false 00:17:51.057 }, 00:17:51.057 "discovery_filter": "match_any" 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "nvmf_set_max_subsystems", 00:17:51.057 "params": { 00:17:51.057 "max_subsystems": 1024 00:17:51.057 } 00:17:51.057 }, 00:17:51.057 { 00:17:51.057 "method": "nvmf_set_crdt", 00:17:51.057 "params": { 00:17:51.057 "crdt1": 0, 00:17:51.057 "crdt2": 0, 00:17:51.057 "crdt3": 0 00:17:51.057 } 00:17:51.058 }, 00:17:51.058 { 00:17:51.058 "method": "nvmf_create_transport", 00:17:51.058 "params": { 00:17:51.058 "abort_timeout_sec": 1, 00:17:51.058 "ack_timeout": 0, 00:17:51.058 "buf_cache_size": 4294967295, 00:17:51.058 "c2h_success": false, 00:17:51.058 "data_wr_pool_size": 0, 00:17:51.058 "dif_insert_or_strip": false, 00:17:51.058 "in_capsule_data_size": 4096, 00:17:51.058 "io_unit_size": 131072, 00:17:51.058 "max_aq_depth": 128, 00:17:51.058 "max_io_qpairs_per_ctrlr": 127, 00:17:51.058 "max_io_size": 131072, 00:17:51.058 "max_queue_depth": 128, 00:17:51.058 "num_shared_buffers": 511, 00:17:51.058 "sock_priority": 0, 00:17:51.058 "trtype": "TCP", 00:17:51.058 "zcopy": false 00:17:51.058 } 00:17:51.058 }, 00:17:51.058 { 00:17:51.058 "method": "nvmf_create_subsystem", 00:17:51.058 "params": { 00:17:51.058 "allow_any_host": false, 00:17:51.058 "ana_reporting": false, 00:17:51.058 "max_cntlid": 65519, 00:17:51.058 "max_namespaces": 10, 00:17:51.058 "min_cntlid": 1, 00:17:51.058 "model_number": "SPDK bdev Controller", 00:17:51.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.058 "serial_number": "SPDK00000000000001" 00:17:51.058 } 00:17:51.058 }, 00:17:51.058 { 00:17:51.058 "method": "nvmf_subsystem_add_host", 00:17:51.058 "params": 
{ 00:17:51.058 "host": "nqn.2016-06.io.spdk:host1", 00:17:51.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.058 "psk": "/tmp/tmp.JFjRFTOUGt" 00:17:51.058 } 00:17:51.058 }, 00:17:51.058 { 00:17:51.058 "method": "nvmf_subsystem_add_ns", 00:17:51.058 "params": { 00:17:51.058 "namespace": { 00:17:51.058 "bdev_name": "malloc0", 00:17:51.058 "nguid": "C8FA9D5E282C4C74B53A6D825FAACD56", 00:17:51.058 "no_auto_visible": false, 00:17:51.058 "nsid": 1, 00:17:51.058 "uuid": "c8fa9d5e-282c-4c74-b53a-6d825faacd56" 00:17:51.058 }, 00:17:51.058 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:51.058 } 00:17:51.058 }, 00:17:51.058 { 00:17:51.058 "method": "nvmf_subsystem_add_listener", 00:17:51.058 "params": { 00:17:51.058 "listen_address": { 00:17:51.058 "adrfam": "IPv4", 00:17:51.058 "traddr": "10.0.0.2", 00:17:51.058 "trsvcid": "4420", 00:17:51.058 "trtype": "TCP" 00:17:51.058 }, 00:17:51.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.058 "secure_channel": true 00:17:51.058 } 00:17:51.058 } 00:17:51.058 ] 00:17:51.058 } 00:17:51.058 ] 00:17:51.058 }' 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83917 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83917 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83917 ']' 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.058 05:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.058 [2024-07-25 05:23:55.095599] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:51.058 [2024-07-25 05:23:55.095726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.316 [2024-07-25 05:23:55.237443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.316 [2024-07-25 05:23:55.293479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.316 [2024-07-25 05:23:55.293530] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:51.316 [2024-07-25 05:23:55.293541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.316 [2024-07-25 05:23:55.293550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.316 [2024-07-25 05:23:55.293557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.316 [2024-07-25 05:23:55.293645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.317 [2024-07-25 05:23:55.476999] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.317 [2024-07-25 05:23:55.492876] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:51.574 [2024-07-25 05:23:55.508902] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:51.574 [2024-07-25 05:23:55.509119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=83961 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 83961 /var/tmp/bdevperf.sock 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83961 ']' 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:52.142 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:52.142 "subsystems": [ 00:17:52.142 { 00:17:52.142 "subsystem": "keyring", 00:17:52.142 "config": [] 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "subsystem": "iobuf", 00:17:52.142 "config": [ 00:17:52.142 { 00:17:52.142 "method": "iobuf_set_options", 00:17:52.142 "params": { 00:17:52.142 "large_bufsize": 135168, 00:17:52.142 "large_pool_count": 1024, 00:17:52.142 "small_bufsize": 8192, 00:17:52.142 "small_pool_count": 8192 00:17:52.142 } 00:17:52.142 } 00:17:52.142 ] 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "subsystem": "sock", 00:17:52.142 "config": [ 00:17:52.142 { 00:17:52.142 "method": "sock_set_default_impl", 00:17:52.142 "params": { 00:17:52.142 "impl_name": "posix" 00:17:52.142 } 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "method": "sock_impl_set_options", 00:17:52.142 "params": { 00:17:52.142 "enable_ktls": false, 00:17:52.142 "enable_placement_id": 0, 00:17:52.142 "enable_quickack": false, 00:17:52.142 "enable_recv_pipe": true, 00:17:52.142 "enable_zerocopy_send_client": false, 00:17:52.142 
"enable_zerocopy_send_server": true, 00:17:52.142 "impl_name": "ssl", 00:17:52.142 "recv_buf_size": 4096, 00:17:52.142 "send_buf_size": 4096, 00:17:52.142 "tls_version": 0, 00:17:52.142 "zerocopy_threshold": 0 00:17:52.142 } 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "method": "sock_impl_set_options", 00:17:52.142 "params": { 00:17:52.142 "enable_ktls": false, 00:17:52.142 "enable_placement_id": 0, 00:17:52.142 "enable_quickack": false, 00:17:52.142 "enable_recv_pipe": true, 00:17:52.142 "enable_zerocopy_send_client": false, 00:17:52.142 "enable_zerocopy_send_server": true, 00:17:52.142 "impl_name": "posix", 00:17:52.142 "recv_buf_size": 2097152, 00:17:52.142 "send_buf_size": 2097152, 00:17:52.142 "tls_version": 0, 00:17:52.142 "zerocopy_threshold": 0 00:17:52.142 } 00:17:52.142 } 00:17:52.142 ] 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "subsystem": "vmd", 00:17:52.142 "config": [] 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "subsystem": "accel", 00:17:52.142 "config": [ 00:17:52.142 { 00:17:52.142 "method": "accel_set_options", 00:17:52.142 "params": { 00:17:52.142 "buf_count": 2048, 00:17:52.142 "large_cache_size": 16, 00:17:52.142 "sequence_count": 2048, 00:17:52.142 "small_cache_size": 128, 00:17:52.142 "task_count": 2048 00:17:52.142 } 00:17:52.142 } 00:17:52.142 ] 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "subsystem": "bdev", 00:17:52.142 "config": [ 00:17:52.142 { 00:17:52.142 "method": "bdev_set_options", 00:17:52.142 "params": { 00:17:52.142 "bdev_auto_examine": true, 00:17:52.142 "bdev_io_cache_size": 256, 00:17:52.142 "bdev_io_pool_size": 65535, 00:17:52.142 "iobuf_large_cache_size": 16, 00:17:52.142 "iobuf_small_cache_size": 128 00:17:52.142 } 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "method": "bdev_raid_set_options", 00:17:52.142 "params": { 00:17:52.142 "process_max_bandwidth_mb_sec": 0, 00:17:52.142 "process_window_size_kb": 1024 00:17:52.142 } 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "method": "bdev_iscsi_set_options", 00:17:52.142 "params": { 00:17:52.142 "timeout_sec": 30 00:17:52.142 } 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "method": "bdev_nvme_set_options", 00:17:52.142 "params": { 00:17:52.142 "action_on_timeout": "none", 00:17:52.142 "allow_accel_sequence": false, 00:17:52.142 "arbitration_burst": 0, 00:17:52.142 "bdev_retry_count": 3, 00:17:52.142 "ctrlr_loss_timeout_sec": 0, 00:17:52.142 "delay_cmd_submit": true, 00:17:52.142 "dhchap_dhgroups": [ 00:17:52.142 "null", 00:17:52.142 "ffdhe2048", 00:17:52.142 "ffdhe3072", 00:17:52.142 "ffdhe4096", 00:17:52.142 "ffdhe6144", 00:17:52.142 "ffdhe8192" 00:17:52.142 ], 00:17:52.142 "dhchap_digests": [ 00:17:52.142 "sha256", 00:17:52.142 "sha384", 00:17:52.142 "sha512" 00:17:52.142 ], 00:17:52.142 "disable_auto_failback": false, 00:17:52.142 "fast_io_fail_timeout_sec": 0, 00:17:52.142 "generate_uuids": false, 00:17:52.142 "high_priority_weight": 0, 00:17:52.142 "io_path_stat": false, 00:17:52.142 "io_queue_requests": 512, 00:17:52.142 "keep_alive_timeout_ms": 10000, 00:17:52.142 "low_priority_weight": 0, 00:17:52.142 "medium_priority_weight": 0, 00:17:52.142 "nvme_adminq_poll_period_us": 10000, 00:17:52.142 "nvme_error_stat": false, 00:17:52.142 "nvme_ioq_poll_period_us": 0, 00:17:52.142 "rdma_cm_event_timeout_ms": 0, 00:17:52.142 "rdma_max_cq_size": 0, 00:17:52.142 "rdma_srq_size": 0, 00:17:52.142 "reconnect_delay_sec": 0, 00:17:52.142 "timeout_admin_us": 0, 00:17:52.142 "timeout_us": 0, 00:17:52.142 "transport_ack_timeout": 0, 00:17:52.142 "transport_retry_count": 4, 00:17:52.142 "transport_tos": 0 
00:17:52.142 } 00:17:52.142 }, 00:17:52.142 { 00:17:52.142 "method": "bdev_nvme_attach_controller", 00:17:52.142 "params": { 00:17:52.142 "adrfam": "IPv4", 00:17:52.142 "ctrlr_loss_timeout_sec": 0, 00:17:52.142 "ddgst": false, 00:17:52.142 "fast_io_fail_timeout_sec": 0, 00:17:52.142 "hdgst": false, 00:17:52.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.142 "name": "TLSTEST", 00:17:52.142 "prchk_guard": false, 00:17:52.142 "prchk_reftag": false, 00:17:52.142 "psk": "/tmp/tmp.JFjRFTOUGt", 00:17:52.143 "reconnect_delay_sec": 0, 00:17:52.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.143 "traddr": "10.0.0.2", 00:17:52.143 "trsvcid": "4420", 00:17:52.143 "trtype": "TCP" 00:17:52.143 } 00:17:52.143 }, 00:17:52.143 { 00:17:52.143 "method": "bdev_nvme_set_hotplug", 00:17:52.143 "params": { 00:17:52.143 "enable": false, 00:17:52.143 "period_us": 100000 00:17:52.143 } 00:17:52.143 }, 00:17:52.143 { 00:17:52.143 "method": "bdev_wait_for_examine" 00:17:52.143 } 00:17:52.143 ] 00:17:52.143 }, 00:17:52.143 { 00:17:52.143 "subsystem": "nbd", 00:17:52.143 "config": [] 00:17:52.143 } 00:17:52.143 ] 00:17:52.143 }' 00:17:52.143 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.143 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.143 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.143 05:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.143 [2024-07-25 05:23:56.177591] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:17:52.143 [2024-07-25 05:23:56.177694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83961 ] 00:17:52.143 [2024-07-25 05:23:56.315513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.428 [2024-07-25 05:23:56.421646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.428 [2024-07-25 05:23:56.555263] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.428 [2024-07-25 05:23:56.555390] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:52.995 05:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.995 05:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:52.995 05:23:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:53.253 Running I/O for 10 seconds... 
00:18:03.223 00:18:03.223 Latency(us) 00:18:03.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.223 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:03.223 Verification LBA range: start 0x0 length 0x2000 00:18:03.223 TLSTESTn1 : 10.02 3787.09 14.79 0.00 0.00 33732.06 7298.33 47900.86 00:18:03.223 =================================================================================================================== 00:18:03.223 Total : 3787.09 14.79 0.00 0.00 33732.06 7298.33 47900.86 00:18:03.223 0 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 83961 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83961 ']' 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83961 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83961 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:03.223 killing process with pid 83961 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83961' 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83961 00:18:03.223 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.223 00:18:03.223 Latency(us) 00:18:03.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.223 =================================================================================================================== 00:18:03.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.223 [2024-07-25 05:24:07.381227] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:03.223 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83961 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 83917 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83917 ']' 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83917 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83917 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:03.481 killing process with pid 83917 00:18:03.481 05:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83917' 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83917 00:18:03.481 [2024-07-25 05:24:07.568903] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:03.481 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83917 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84106 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84106 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84106 ']' 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.739 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.739 [2024-07-25 05:24:07.814773] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:03.739 [2024-07-25 05:24:07.814920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.997 [2024-07-25 05:24:07.962001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.998 [2024-07-25 05:24:08.033352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.998 [2024-07-25 05:24:08.033415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.998 [2024-07-25 05:24:08.033432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.998 [2024-07-25 05:24:08.033442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.998 [2024-07-25 05:24:08.033451] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
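Once pid 84106 is up, the trace below rebuilds the same target state one RPC at a time instead of from an inline config file. Collected into a script, that sequence is the following (a sketch assuming the default /var/tmp/spdk.sock RPC socket and the throwaway PSK file /tmp/tmp.JFjRFTOUGt created earlier in this run; the xtrace lines that follow show each call verbatim):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k requires a secure channel (TLS) on this listener
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # deprecated PSK-path form; this trips the nvmf_tcp_psk_path warning
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt

The initiator side then mirrors this against the bdevperf RPC socket: keyring_file_add_key registers the same PSK file as key0, and bdev_nvme_attach_controller is invoked with "--psk key0", which is why bdev_nvme_rpc.c logs "TLS support is considered experimental" at attach time.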
00:18:03.998 [2024-07-25 05:24:08.033482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.JFjRFTOUGt 00:18:04.931 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JFjRFTOUGt 00:18:04.932 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.932 [2024-07-25 05:24:09.084478] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.932 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.189 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.447 [2024-07-25 05:24:09.612603] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.447 [2024-07-25 05:24:09.612822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.705 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.963 malloc0 00:18:05.963 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.222 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JFjRFTOUGt 00:18:06.480 [2024-07-25 05:24:10.447590] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84209 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84209 /var/tmp/bdevperf.sock 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84209 ']' 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.480 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.480 [2024-07-25 05:24:10.513445] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:06.480 [2024-07-25 05:24:10.513537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84209 ] 00:18:06.480 [2024-07-25 05:24:10.648142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.738 [2024-07-25 05:24:10.743361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.304 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:07.304 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:07.304 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JFjRFTOUGt 00:18:07.869 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:07.869 [2024-07-25 05:24:12.034163] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.126 nvme0n1 00:18:08.127 05:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:08.127 Running I/O for 1 seconds... 
00:18:09.503 00:18:09.503 Latency(us) 00:18:09.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.503 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:09.503 Verification LBA range: start 0x0 length 0x2000 00:18:09.503 nvme0n1 : 1.02 3759.18 14.68 0.00 0.00 33665.29 7089.80 23235.49 00:18:09.503 =================================================================================================================== 00:18:09.503 Total : 3759.18 14.68 0.00 0.00 33665.29 7089.80 23235.49 00:18:09.503 0 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 84209 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84209 ']' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84209 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84209 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84209' 00:18:09.503 killing process with pid 84209 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84209 00:18:09.503 Received shutdown signal, test time was about 1.000000 seconds 00:18:09.503 00:18:09.503 Latency(us) 00:18:09.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.503 =================================================================================================================== 00:18:09.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84209 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 84106 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84106 ']' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84106 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84106 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.503 killing process with pid 84106 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84106' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84106 00:18:09.503 [2024-07-25 05:24:13.500017] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: 
deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84106 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84284 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84284 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84284 ']' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.503 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.762 [2024-07-25 05:24:13.729349] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:09.762 [2024-07-25 05:24:13.729456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.762 [2024-07-25 05:24:13.866290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.762 [2024-07-25 05:24:13.926252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.762 [2024-07-25 05:24:13.926308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.762 [2024-07-25 05:24:13.926320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.762 [2024-07-25 05:24:13.926328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.762 [2024-07-25 05:24:13.926335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:09.762 [2024-07-25 05:24:13.926363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.021 [2024-07-25 05:24:14.059232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.021 malloc0 00:18:10.021 [2024-07-25 05:24:14.086149] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.021 [2024-07-25 05:24:14.086351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=84315 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 84315 /var/tmp/bdevperf.sock 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84315 ']' 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.021 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.021 [2024-07-25 05:24:14.185448] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:18:10.021 [2024-07-25 05:24:14.185578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84315 ] 00:18:10.279 [2024-07-25 05:24:14.325890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.279 [2024-07-25 05:24:14.395879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.244 05:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.244 05:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:11.244 05:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JFjRFTOUGt 00:18:11.503 05:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:11.761 [2024-07-25 05:24:15.772005] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.761 nvme0n1 00:18:11.761 05:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.020 Running I/O for 1 seconds... 00:18:12.954 00:18:12.954 Latency(us) 00:18:12.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.954 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:12.954 Verification LBA range: start 0x0 length 0x2000 00:18:12.954 nvme0n1 : 1.03 3611.79 14.11 0.00 0.00 35024.37 8877.15 22282.24 00:18:12.954 =================================================================================================================== 00:18:12.954 Total : 3611.79 14.11 0.00 0.00 35024.37 8877.15 22282.24 00:18:12.954 0 00:18:12.954 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:12.954 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.954 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.213 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.213 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:13.213 "subsystems": [ 00:18:13.213 { 00:18:13.213 "subsystem": "keyring", 00:18:13.213 "config": [ 00:18:13.213 { 00:18:13.213 "method": "keyring_file_add_key", 00:18:13.213 "params": { 00:18:13.213 "name": "key0", 00:18:13.213 "path": "/tmp/tmp.JFjRFTOUGt" 00:18:13.213 } 00:18:13.213 } 00:18:13.213 ] 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "subsystem": "iobuf", 00:18:13.213 "config": [ 00:18:13.213 { 00:18:13.213 "method": "iobuf_set_options", 00:18:13.213 "params": { 00:18:13.213 "large_bufsize": 135168, 00:18:13.213 "large_pool_count": 1024, 00:18:13.213 "small_bufsize": 8192, 00:18:13.213 "small_pool_count": 8192 00:18:13.213 } 00:18:13.213 } 00:18:13.213 ] 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "subsystem": "sock", 00:18:13.213 "config": [ 00:18:13.213 { 00:18:13.213 "method": "sock_set_default_impl", 
00:18:13.213 "params": { 00:18:13.213 "impl_name": "posix" 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "sock_impl_set_options", 00:18:13.213 "params": { 00:18:13.213 "enable_ktls": false, 00:18:13.213 "enable_placement_id": 0, 00:18:13.213 "enable_quickack": false, 00:18:13.213 "enable_recv_pipe": true, 00:18:13.213 "enable_zerocopy_send_client": false, 00:18:13.213 "enable_zerocopy_send_server": true, 00:18:13.213 "impl_name": "ssl", 00:18:13.213 "recv_buf_size": 4096, 00:18:13.213 "send_buf_size": 4096, 00:18:13.213 "tls_version": 0, 00:18:13.213 "zerocopy_threshold": 0 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "sock_impl_set_options", 00:18:13.213 "params": { 00:18:13.213 "enable_ktls": false, 00:18:13.213 "enable_placement_id": 0, 00:18:13.213 "enable_quickack": false, 00:18:13.213 "enable_recv_pipe": true, 00:18:13.213 "enable_zerocopy_send_client": false, 00:18:13.213 "enable_zerocopy_send_server": true, 00:18:13.213 "impl_name": "posix", 00:18:13.213 "recv_buf_size": 2097152, 00:18:13.213 "send_buf_size": 2097152, 00:18:13.213 "tls_version": 0, 00:18:13.213 "zerocopy_threshold": 0 00:18:13.213 } 00:18:13.213 } 00:18:13.213 ] 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "subsystem": "vmd", 00:18:13.213 "config": [] 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "subsystem": "accel", 00:18:13.213 "config": [ 00:18:13.213 { 00:18:13.213 "method": "accel_set_options", 00:18:13.213 "params": { 00:18:13.213 "buf_count": 2048, 00:18:13.213 "large_cache_size": 16, 00:18:13.213 "sequence_count": 2048, 00:18:13.213 "small_cache_size": 128, 00:18:13.213 "task_count": 2048 00:18:13.213 } 00:18:13.213 } 00:18:13.213 ] 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "subsystem": "bdev", 00:18:13.213 "config": [ 00:18:13.213 { 00:18:13.213 "method": "bdev_set_options", 00:18:13.213 "params": { 00:18:13.213 "bdev_auto_examine": true, 00:18:13.213 "bdev_io_cache_size": 256, 00:18:13.213 "bdev_io_pool_size": 65535, 00:18:13.213 "iobuf_large_cache_size": 16, 00:18:13.213 "iobuf_small_cache_size": 128 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "bdev_raid_set_options", 00:18:13.213 "params": { 00:18:13.213 "process_max_bandwidth_mb_sec": 0, 00:18:13.213 "process_window_size_kb": 1024 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "bdev_iscsi_set_options", 00:18:13.213 "params": { 00:18:13.213 "timeout_sec": 30 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "bdev_nvme_set_options", 00:18:13.213 "params": { 00:18:13.213 "action_on_timeout": "none", 00:18:13.213 "allow_accel_sequence": false, 00:18:13.213 "arbitration_burst": 0, 00:18:13.213 "bdev_retry_count": 3, 00:18:13.213 "ctrlr_loss_timeout_sec": 0, 00:18:13.213 "delay_cmd_submit": true, 00:18:13.213 "dhchap_dhgroups": [ 00:18:13.213 "null", 00:18:13.213 "ffdhe2048", 00:18:13.213 "ffdhe3072", 00:18:13.213 "ffdhe4096", 00:18:13.213 "ffdhe6144", 00:18:13.213 "ffdhe8192" 00:18:13.213 ], 00:18:13.213 "dhchap_digests": [ 00:18:13.213 "sha256", 00:18:13.213 "sha384", 00:18:13.213 "sha512" 00:18:13.213 ], 00:18:13.213 "disable_auto_failback": false, 00:18:13.213 "fast_io_fail_timeout_sec": 0, 00:18:13.213 "generate_uuids": false, 00:18:13.213 "high_priority_weight": 0, 00:18:13.213 "io_path_stat": false, 00:18:13.213 "io_queue_requests": 0, 00:18:13.213 "keep_alive_timeout_ms": 10000, 00:18:13.213 "low_priority_weight": 0, 00:18:13.213 "medium_priority_weight": 0, 00:18:13.213 "nvme_adminq_poll_period_us": 10000, 00:18:13.213 "nvme_error_stat": 
false, 00:18:13.213 "nvme_ioq_poll_period_us": 0, 00:18:13.213 "rdma_cm_event_timeout_ms": 0, 00:18:13.213 "rdma_max_cq_size": 0, 00:18:13.213 "rdma_srq_size": 0, 00:18:13.213 "reconnect_delay_sec": 0, 00:18:13.213 "timeout_admin_us": 0, 00:18:13.213 "timeout_us": 0, 00:18:13.213 "transport_ack_timeout": 0, 00:18:13.213 "transport_retry_count": 4, 00:18:13.213 "transport_tos": 0 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "bdev_nvme_set_hotplug", 00:18:13.213 "params": { 00:18:13.213 "enable": false, 00:18:13.213 "period_us": 100000 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "bdev_malloc_create", 00:18:13.213 "params": { 00:18:13.213 "block_size": 4096, 00:18:13.213 "dif_is_head_of_md": false, 00:18:13.213 "dif_pi_format": 0, 00:18:13.213 "dif_type": 0, 00:18:13.213 "md_size": 0, 00:18:13.213 "name": "malloc0", 00:18:13.213 "num_blocks": 8192, 00:18:13.213 "optimal_io_boundary": 0, 00:18:13.213 "physical_block_size": 4096, 00:18:13.213 "uuid": "25eb312d-24e0-4c47-9f60-e6f9c413bfce" 00:18:13.213 } 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "method": "bdev_wait_for_examine" 00:18:13.213 } 00:18:13.213 ] 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "subsystem": "nbd", 00:18:13.213 "config": [] 00:18:13.213 }, 00:18:13.213 { 00:18:13.213 "subsystem": "scheduler", 00:18:13.213 "config": [ 00:18:13.213 { 00:18:13.213 "method": "framework_set_scheduler", 00:18:13.213 "params": { 00:18:13.213 "name": "static" 00:18:13.214 } 00:18:13.214 } 00:18:13.214 ] 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "subsystem": "nvmf", 00:18:13.214 "config": [ 00:18:13.214 { 00:18:13.214 "method": "nvmf_set_config", 00:18:13.214 "params": { 00:18:13.214 "admin_cmd_passthru": { 00:18:13.214 "identify_ctrlr": false 00:18:13.214 }, 00:18:13.214 "discovery_filter": "match_any" 00:18:13.214 } 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "method": "nvmf_set_max_subsystems", 00:18:13.214 "params": { 00:18:13.214 "max_subsystems": 1024 00:18:13.214 } 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "method": "nvmf_set_crdt", 00:18:13.214 "params": { 00:18:13.214 "crdt1": 0, 00:18:13.214 "crdt2": 0, 00:18:13.214 "crdt3": 0 00:18:13.214 } 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "method": "nvmf_create_transport", 00:18:13.214 "params": { 00:18:13.214 "abort_timeout_sec": 1, 00:18:13.214 "ack_timeout": 0, 00:18:13.214 "buf_cache_size": 4294967295, 00:18:13.214 "c2h_success": false, 00:18:13.214 "data_wr_pool_size": 0, 00:18:13.214 "dif_insert_or_strip": false, 00:18:13.214 "in_capsule_data_size": 4096, 00:18:13.214 "io_unit_size": 131072, 00:18:13.214 "max_aq_depth": 128, 00:18:13.214 "max_io_qpairs_per_ctrlr": 127, 00:18:13.214 "max_io_size": 131072, 00:18:13.214 "max_queue_depth": 128, 00:18:13.214 "num_shared_buffers": 511, 00:18:13.214 "sock_priority": 0, 00:18:13.214 "trtype": "TCP", 00:18:13.214 "zcopy": false 00:18:13.214 } 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "method": "nvmf_create_subsystem", 00:18:13.214 "params": { 00:18:13.214 "allow_any_host": false, 00:18:13.214 "ana_reporting": false, 00:18:13.214 "max_cntlid": 65519, 00:18:13.214 "max_namespaces": 32, 00:18:13.214 "min_cntlid": 1, 00:18:13.214 "model_number": "SPDK bdev Controller", 00:18:13.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.214 "serial_number": "00000000000000000000" 00:18:13.214 } 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "method": "nvmf_subsystem_add_host", 00:18:13.214 "params": { 00:18:13.214 "host": "nqn.2016-06.io.spdk:host1", 00:18:13.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:13.214 "psk": "key0" 00:18:13.214 } 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "method": "nvmf_subsystem_add_ns", 00:18:13.214 "params": { 00:18:13.214 "namespace": { 00:18:13.214 "bdev_name": "malloc0", 00:18:13.214 "nguid": "25EB312D24E04C479F60E6F9C413BFCE", 00:18:13.214 "no_auto_visible": false, 00:18:13.214 "nsid": 1, 00:18:13.214 "uuid": "25eb312d-24e0-4c47-9f60-e6f9c413bfce" 00:18:13.214 }, 00:18:13.214 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:13.214 } 00:18:13.214 }, 00:18:13.214 { 00:18:13.214 "method": "nvmf_subsystem_add_listener", 00:18:13.214 "params": { 00:18:13.214 "listen_address": { 00:18:13.214 "adrfam": "IPv4", 00:18:13.214 "traddr": "10.0.0.2", 00:18:13.214 "trsvcid": "4420", 00:18:13.214 "trtype": "TCP" 00:18:13.214 }, 00:18:13.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.214 "secure_channel": false, 00:18:13.214 "sock_impl": "ssl" 00:18:13.214 } 00:18:13.214 } 00:18:13.214 ] 00:18:13.214 } 00:18:13.214 ] 00:18:13.214 }' 00:18:13.214 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:13.473 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:13.473 "subsystems": [ 00:18:13.473 { 00:18:13.473 "subsystem": "keyring", 00:18:13.473 "config": [ 00:18:13.473 { 00:18:13.473 "method": "keyring_file_add_key", 00:18:13.473 "params": { 00:18:13.473 "name": "key0", 00:18:13.473 "path": "/tmp/tmp.JFjRFTOUGt" 00:18:13.473 } 00:18:13.473 } 00:18:13.473 ] 00:18:13.473 }, 00:18:13.473 { 00:18:13.473 "subsystem": "iobuf", 00:18:13.473 "config": [ 00:18:13.473 { 00:18:13.473 "method": "iobuf_set_options", 00:18:13.473 "params": { 00:18:13.473 "large_bufsize": 135168, 00:18:13.473 "large_pool_count": 1024, 00:18:13.473 "small_bufsize": 8192, 00:18:13.473 "small_pool_count": 8192 00:18:13.473 } 00:18:13.473 } 00:18:13.473 ] 00:18:13.473 }, 00:18:13.473 { 00:18:13.473 "subsystem": "sock", 00:18:13.473 "config": [ 00:18:13.473 { 00:18:13.473 "method": "sock_set_default_impl", 00:18:13.473 "params": { 00:18:13.473 "impl_name": "posix" 00:18:13.473 } 00:18:13.473 }, 00:18:13.473 { 00:18:13.473 "method": "sock_impl_set_options", 00:18:13.473 "params": { 00:18:13.473 "enable_ktls": false, 00:18:13.473 "enable_placement_id": 0, 00:18:13.473 "enable_quickack": false, 00:18:13.473 "enable_recv_pipe": true, 00:18:13.473 "enable_zerocopy_send_client": false, 00:18:13.473 "enable_zerocopy_send_server": true, 00:18:13.473 "impl_name": "ssl", 00:18:13.473 "recv_buf_size": 4096, 00:18:13.473 "send_buf_size": 4096, 00:18:13.473 "tls_version": 0, 00:18:13.473 "zerocopy_threshold": 0 00:18:13.473 } 00:18:13.473 }, 00:18:13.473 { 00:18:13.473 "method": "sock_impl_set_options", 00:18:13.473 "params": { 00:18:13.473 "enable_ktls": false, 00:18:13.473 "enable_placement_id": 0, 00:18:13.473 "enable_quickack": false, 00:18:13.473 "enable_recv_pipe": true, 00:18:13.473 "enable_zerocopy_send_client": false, 00:18:13.473 "enable_zerocopy_send_server": true, 00:18:13.473 "impl_name": "posix", 00:18:13.473 "recv_buf_size": 2097152, 00:18:13.473 "send_buf_size": 2097152, 00:18:13.473 "tls_version": 0, 00:18:13.473 "zerocopy_threshold": 0 00:18:13.473 } 00:18:13.473 } 00:18:13.473 ] 00:18:13.473 }, 00:18:13.473 { 00:18:13.473 "subsystem": "vmd", 00:18:13.473 "config": [] 00:18:13.473 }, 00:18:13.473 { 00:18:13.473 "subsystem": "accel", 00:18:13.473 "config": [ 00:18:13.473 { 00:18:13.473 "method": "accel_set_options", 00:18:13.473 "params": { 00:18:13.473 
"buf_count": 2048, 00:18:13.473 "large_cache_size": 16, 00:18:13.473 "sequence_count": 2048, 00:18:13.473 "small_cache_size": 128, 00:18:13.473 "task_count": 2048 00:18:13.473 } 00:18:13.473 } 00:18:13.474 ] 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "subsystem": "bdev", 00:18:13.474 "config": [ 00:18:13.474 { 00:18:13.474 "method": "bdev_set_options", 00:18:13.474 "params": { 00:18:13.474 "bdev_auto_examine": true, 00:18:13.474 "bdev_io_cache_size": 256, 00:18:13.474 "bdev_io_pool_size": 65535, 00:18:13.474 "iobuf_large_cache_size": 16, 00:18:13.474 "iobuf_small_cache_size": 128 00:18:13.474 } 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "method": "bdev_raid_set_options", 00:18:13.474 "params": { 00:18:13.474 "process_max_bandwidth_mb_sec": 0, 00:18:13.474 "process_window_size_kb": 1024 00:18:13.474 } 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "method": "bdev_iscsi_set_options", 00:18:13.474 "params": { 00:18:13.474 "timeout_sec": 30 00:18:13.474 } 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "method": "bdev_nvme_set_options", 00:18:13.474 "params": { 00:18:13.474 "action_on_timeout": "none", 00:18:13.474 "allow_accel_sequence": false, 00:18:13.474 "arbitration_burst": 0, 00:18:13.474 "bdev_retry_count": 3, 00:18:13.474 "ctrlr_loss_timeout_sec": 0, 00:18:13.474 "delay_cmd_submit": true, 00:18:13.474 "dhchap_dhgroups": [ 00:18:13.474 "null", 00:18:13.474 "ffdhe2048", 00:18:13.474 "ffdhe3072", 00:18:13.474 "ffdhe4096", 00:18:13.474 "ffdhe6144", 00:18:13.474 "ffdhe8192" 00:18:13.474 ], 00:18:13.474 "dhchap_digests": [ 00:18:13.474 "sha256", 00:18:13.474 "sha384", 00:18:13.474 "sha512" 00:18:13.474 ], 00:18:13.474 "disable_auto_failback": false, 00:18:13.474 "fast_io_fail_timeout_sec": 0, 00:18:13.474 "generate_uuids": false, 00:18:13.474 "high_priority_weight": 0, 00:18:13.474 "io_path_stat": false, 00:18:13.474 "io_queue_requests": 512, 00:18:13.474 "keep_alive_timeout_ms": 10000, 00:18:13.474 "low_priority_weight": 0, 00:18:13.474 "medium_priority_weight": 0, 00:18:13.474 "nvme_adminq_poll_period_us": 10000, 00:18:13.474 "nvme_error_stat": false, 00:18:13.474 "nvme_ioq_poll_period_us": 0, 00:18:13.474 "rdma_cm_event_timeout_ms": 0, 00:18:13.474 "rdma_max_cq_size": 0, 00:18:13.474 "rdma_srq_size": 0, 00:18:13.474 "reconnect_delay_sec": 0, 00:18:13.474 "timeout_admin_us": 0, 00:18:13.474 "timeout_us": 0, 00:18:13.474 "transport_ack_timeout": 0, 00:18:13.474 "transport_retry_count": 4, 00:18:13.474 "transport_tos": 0 00:18:13.474 } 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "method": "bdev_nvme_attach_controller", 00:18:13.474 "params": { 00:18:13.474 "adrfam": "IPv4", 00:18:13.474 "ctrlr_loss_timeout_sec": 0, 00:18:13.474 "ddgst": false, 00:18:13.474 "fast_io_fail_timeout_sec": 0, 00:18:13.474 "hdgst": false, 00:18:13.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.474 "name": "nvme0", 00:18:13.474 "prchk_guard": false, 00:18:13.474 "prchk_reftag": false, 00:18:13.474 "psk": "key0", 00:18:13.474 "reconnect_delay_sec": 0, 00:18:13.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.474 "traddr": "10.0.0.2", 00:18:13.474 "trsvcid": "4420", 00:18:13.474 "trtype": "TCP" 00:18:13.474 } 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "method": "bdev_nvme_set_hotplug", 00:18:13.474 "params": { 00:18:13.474 "enable": false, 00:18:13.474 "period_us": 100000 00:18:13.474 } 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "method": "bdev_enable_histogram", 00:18:13.474 "params": { 00:18:13.474 "enable": true, 00:18:13.474 "name": "nvme0n1" 00:18:13.474 } 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 
"method": "bdev_wait_for_examine" 00:18:13.474 } 00:18:13.474 ] 00:18:13.474 }, 00:18:13.474 { 00:18:13.474 "subsystem": "nbd", 00:18:13.474 "config": [] 00:18:13.474 } 00:18:13.474 ] 00:18:13.474 }' 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 84315 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84315 ']' 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84315 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84315 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:13.474 killing process with pid 84315 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84315' 00:18:13.474 Received shutdown signal, test time was about 1.000000 seconds 00:18:13.474 00:18:13.474 Latency(us) 00:18:13.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.474 =================================================================================================================== 00:18:13.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84315 00:18:13.474 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84315 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 84284 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84284 ']' 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84284 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84284 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.733 killing process with pid 84284 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84284' 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84284 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84284 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:13.733 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:13.733 "subsystems": [ 00:18:13.733 { 00:18:13.733 "subsystem": "keyring", 00:18:13.733 "config": [ 00:18:13.733 { 00:18:13.733 "method": 
"keyring_file_add_key", 00:18:13.733 "params": { 00:18:13.733 "name": "key0", 00:18:13.733 "path": "/tmp/tmp.JFjRFTOUGt" 00:18:13.733 } 00:18:13.733 } 00:18:13.733 ] 00:18:13.733 }, 00:18:13.733 { 00:18:13.733 "subsystem": "iobuf", 00:18:13.733 "config": [ 00:18:13.733 { 00:18:13.733 "method": "iobuf_set_options", 00:18:13.733 "params": { 00:18:13.733 "large_bufsize": 135168, 00:18:13.733 "large_pool_count": 1024, 00:18:13.733 "small_bufsize": 8192, 00:18:13.733 "small_pool_count": 8192 00:18:13.733 } 00:18:13.733 } 00:18:13.733 ] 00:18:13.733 }, 00:18:13.733 { 00:18:13.733 "subsystem": "sock", 00:18:13.733 "config": [ 00:18:13.733 { 00:18:13.733 "method": "sock_set_default_impl", 00:18:13.733 "params": { 00:18:13.733 "impl_name": "posix" 00:18:13.733 } 00:18:13.733 }, 00:18:13.734 { 00:18:13.734 "method": "sock_impl_set_options", 00:18:13.734 "params": { 00:18:13.734 "enable_ktls": false, 00:18:13.734 "enable_placement_id": 0, 00:18:13.734 "enable_quickack": false, 00:18:13.734 "enable_recv_pipe": true, 00:18:13.734 "enable_zerocopy_send_client": false, 00:18:13.734 "enable_zerocopy_send_server": true, 00:18:13.734 "impl_name": "ssl", 00:18:13.734 "recv_buf_size": 4096, 00:18:13.734 "send_buf_size": 4096, 00:18:13.734 "tls_version": 0, 00:18:13.734 "zerocopy_threshold": 0 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "sock_impl_set_options", 00:18:13.734 "params": { 00:18:13.734 "enable_ktls": false, 00:18:13.734 "enable_placement_id": 0, 00:18:13.734 "enable_quickack": false, 00:18:13.734 "enable_recv_pipe": true, 00:18:13.734 "enable_zerocopy_send_client": false, 00:18:13.734 "enable_zerocopy_send_server": true, 00:18:13.734 "impl_name": "posix", 00:18:13.734 "recv_buf_size": 2097152, 00:18:13.734 "send_buf_size": 2097152, 00:18:13.734 "tls_version": 0, 00:18:13.734 "zerocopy_threshold": 0 00:18:13.734 } 00:18:13.734 } 00:18:13.734 ] 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "subsystem": "vmd", 00:18:13.734 "config": [] 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "subsystem": "accel", 00:18:13.734 "config": [ 00:18:13.734 { 00:18:13.734 "method": "accel_set_options", 00:18:13.734 "params": { 00:18:13.734 "buf_count": 2048, 00:18:13.734 "large_cache_size": 16, 00:18:13.734 "sequence_count": 2048, 00:18:13.734 "small_cache_size": 128, 00:18:13.734 "task_count": 2048 00:18:13.734 } 00:18:13.734 } 00:18:13.734 ] 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "subsystem": "bdev", 00:18:13.734 "config": [ 00:18:13.734 { 00:18:13.734 "method": "bdev_set_options", 00:18:13.734 "params": { 00:18:13.734 "bdev_auto_examine": true, 00:18:13.734 "bdev_io_cache_size": 256, 00:18:13.734 "bdev_io_pool_size": 65535, 00:18:13.734 "iobuf_large_cache_size": 16, 00:18:13.734 "iobuf_small_cache_size": 128 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "bdev_raid_set_options", 00:18:13.734 "params": { 00:18:13.734 "process_max_bandwidth_mb_sec": 0, 00:18:13.734 "process_window_size_kb": 1024 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "bdev_iscsi_set_options", 00:18:13.734 "params": { 00:18:13.734 "timeout_sec": 30 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "bdev_nvme_set_options", 00:18:13.734 "params": { 00:18:13.734 "action_on_timeout": "none", 00:18:13.734 "allow_accel_sequence": false, 00:18:13.734 "arbitration_burst": 0, 00:18:13.734 "bdev_retry_count": 3, 00:18:13.734 "ctrlr_loss_timeout_sec": 0, 00:18:13.734 "delay_cmd_submit": true, 00:18:13.734 "dhchap_dhgroups": [ 00:18:13.734 "null", 00:18:13.734 
"ffdhe2048", 00:18:13.734 "ffdhe3072", 00:18:13.734 "ffdhe4096", 00:18:13.734 "ffdhe6144", 00:18:13.734 "ffdhe8192" 00:18:13.734 ], 00:18:13.734 "dhchap_digests": [ 00:18:13.734 "sha256", 00:18:13.734 "sha384", 00:18:13.734 "sha512" 00:18:13.734 ], 00:18:13.734 "disable_auto_failback": false, 00:18:13.734 "fast_io_fail_timeout_sec": 0, 00:18:13.734 "generate_uuids": false, 00:18:13.734 "high_priority_weight": 0, 00:18:13.734 "io_path_stat": false, 00:18:13.734 "io_queue_requests": 0, 00:18:13.734 "keep_alive_timeout_ms": 10000, 00:18:13.734 "low_priority_weight": 0, 00:18:13.734 "medium_priority_weight": 0, 00:18:13.734 "nvme_adminq_poll_period_us": 10000, 00:18:13.734 "nvme_error_stat": false, 00:18:13.734 "nvme_ioq_poll_period_us": 0, 00:18:13.734 "rdma_cm_event_timeout_ms": 0, 00:18:13.734 "rdma_max_cq_size": 0, 00:18:13.734 "rdma_srq_size": 0, 00:18:13.734 "reconnect_delay_sec": 0, 00:18:13.734 "timeout_admin_us": 0, 00:18:13.734 "timeout_us": 0, 00:18:13.734 "transport_ack_timeout": 0, 00:18:13.734 "transport_retry_count": 4, 00:18:13.734 "transport_tos": 0 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "bdev_nvme_set_hotplug", 00:18:13.734 "params": { 00:18:13.734 "enable": false, 00:18:13.734 "period_us": 100000 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "bdev_malloc_create", 00:18:13.734 "params": { 00:18:13.734 "block_size": 4096, 00:18:13.734 "dif_is_head_of_md": false, 00:18:13.734 "dif_pi_format": 0, 00:18:13.734 "dif_type": 0, 00:18:13.734 "md_size": 0, 00:18:13.734 "name": "malloc0", 00:18:13.734 "num_blocks": 8192, 00:18:13.734 "optimal_io_boundary": 0, 00:18:13.734 "physical_block_size": 4096, 00:18:13.734 "uuid": "25eb312d-24e0-4c47-9f60-e6f9c413bfce" 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "bdev_wait_for_examine" 00:18:13.734 } 00:18:13.734 ] 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "subsystem": "nbd", 00:18:13.734 "config": [] 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "subsystem": "scheduler", 00:18:13.734 "config": [ 00:18:13.734 { 00:18:13.734 "method": "framework_set_scheduler", 00:18:13.734 "params": { 00:18:13.734 "name": "static" 00:18:13.734 } 00:18:13.734 } 00:18:13.734 ] 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "subsystem": "nvmf", 00:18:13.734 "config": [ 00:18:13.734 { 00:18:13.734 "method": "nvmf_set_config", 00:18:13.734 "params": { 00:18:13.734 "admin_cmd_passthru": { 00:18:13.734 "identify_ctrlr": false 00:18:13.734 }, 00:18:13.734 "discovery_filter": "match_any" 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "nvmf_set_max_subsystems", 00:18:13.734 "params": { 00:18:13.734 "max_subsystems": 1024 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "nvmf_set_crdt", 00:18:13.734 "params": { 00:18:13.734 "crdt1": 0, 00:18:13.734 "crdt2": 0, 00:18:13.734 "crdt3": 0 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "nvmf_create_transport", 00:18:13.734 "params": { 00:18:13.734 "abort_timeout_sec": 1, 00:18:13.734 "ack_timeout": 0, 00:18:13.734 "buf_cache_size": 4294967295, 00:18:13.734 "c2h_success": false, 00:18:13.734 "data_wr_pool_size": 0, 00:18:13.734 "dif_insert_or_strip": false, 00:18:13.734 "in_capsule_data_size": 4096, 00:18:13.734 "io_unit_size": 131072, 00:18:13.734 "max_aq_depth": 128, 00:18:13.734 "max_io_qpairs_per_ctrlr": 127, 00:18:13.734 "max_io_size": 131072, 00:18:13.734 "max_queue_depth": 128, 00:18:13.734 "num_shared_buffers": 511, 00:18:13.734 "sock_priority": 0, 00:18:13.734 "trtype": "TCP", 
00:18:13.734 "zcopy": false 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "nvmf_create_subsystem", 00:18:13.734 "params": { 00:18:13.734 "allow_any_host": false, 00:18:13.734 "ana_reporting": false, 00:18:13.734 "max_cntlid": 65519, 00:18:13.734 "max_namespaces": 32, 00:18:13.734 "min_cntlid": 1, 00:18:13.734 "model_number": "SPDK bdev Controller", 00:18:13.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.734 "serial_number": "00000000000000000000" 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "nvmf_subsystem_add_host", 00:18:13.734 "params": { 00:18:13.734 "host": "nqn.2016-06.io.spdk:host1", 00:18:13.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.734 "psk": "key0" 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "nvmf_subsystem_add_ns", 00:18:13.734 "params": { 00:18:13.734 "namespace": { 00:18:13.734 "bdev_name": "malloc0", 00:18:13.734 "nguid": "25EB312D24E04C479F60E6F9C413BFCE", 00:18:13.734 "no_auto_visible": false, 00:18:13.734 "nsid": 1, 00:18:13.734 "uuid": "25eb312d-24e0-4c47-9f60-e6f9c413bfce" 00:18:13.734 }, 00:18:13.734 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:13.734 } 00:18:13.734 }, 00:18:13.734 { 00:18:13.734 "method": "nvmf_subsystem_add_listener", 00:18:13.734 "params": { 00:18:13.734 "listen_address": { 00:18:13.734 "adrfam": "IPv4", 00:18:13.734 "traddr": "10.0.0.2", 00:18:13.734 "trsvcid": "4420", 00:18:13.734 "trtype": "TCP" 00:18:13.734 }, 00:18:13.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.734 "secure_channel": false, 00:18:13.734 "sock_impl": "ssl" 00:18:13.734 } 00:18:13.734 } 00:18:13.734 ] 00:18:13.734 } 00:18:13.734 ] 00:18:13.734 }' 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84406 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84406 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84406 ']' 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.735 05:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.993 [2024-07-25 05:24:17.953631] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:18:13.993 [2024-07-25 05:24:17.954125] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.993 [2024-07-25 05:24:18.091321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.993 [2024-07-25 05:24:18.159312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.993 [2024-07-25 05:24:18.159376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.993 [2024-07-25 05:24:18.159390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.993 [2024-07-25 05:24:18.159400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.993 [2024-07-25 05:24:18.159409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.993 [2024-07-25 05:24:18.159498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.251 [2024-07-25 05:24:18.356399] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.251 [2024-07-25 05:24:18.388315] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.251 [2024-07-25 05:24:18.388507] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=84450 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 84450 /var/tmp/bdevperf.sock 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84450 ']' 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.188 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:15.188 "subsystems": [ 00:18:15.188 { 00:18:15.188 "subsystem": "keyring", 00:18:15.188 "config": [ 00:18:15.188 { 00:18:15.188 "method": "keyring_file_add_key", 00:18:15.188 "params": { 00:18:15.188 "name": "key0", 00:18:15.188 "path": "/tmp/tmp.JFjRFTOUGt" 00:18:15.188 } 00:18:15.188 } 00:18:15.188 ] 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "subsystem": "iobuf", 00:18:15.188 "config": [ 
00:18:15.188 { 00:18:15.188 "method": "iobuf_set_options", 00:18:15.188 "params": { 00:18:15.188 "large_bufsize": 135168, 00:18:15.188 "large_pool_count": 1024, 00:18:15.188 "small_bufsize": 8192, 00:18:15.188 "small_pool_count": 8192 00:18:15.188 } 00:18:15.188 } 00:18:15.188 ] 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "subsystem": "sock", 00:18:15.188 "config": [ 00:18:15.188 { 00:18:15.188 "method": "sock_set_default_impl", 00:18:15.188 "params": { 00:18:15.188 "impl_name": "posix" 00:18:15.188 } 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "method": "sock_impl_set_options", 00:18:15.188 "params": { 00:18:15.188 "enable_ktls": false, 00:18:15.188 "enable_placement_id": 0, 00:18:15.188 "enable_quickack": false, 00:18:15.188 "enable_recv_pipe": true, 00:18:15.188 "enable_zerocopy_send_client": false, 00:18:15.188 "enable_zerocopy_send_server": true, 00:18:15.188 "impl_name": "ssl", 00:18:15.188 "recv_buf_size": 4096, 00:18:15.188 "send_buf_size": 4096, 00:18:15.188 "tls_version": 0, 00:18:15.188 "zerocopy_threshold": 0 00:18:15.188 } 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "method": "sock_impl_set_options", 00:18:15.188 "params": { 00:18:15.188 "enable_ktls": false, 00:18:15.188 "enable_placement_id": 0, 00:18:15.188 "enable_quickack": false, 00:18:15.188 "enable_recv_pipe": true, 00:18:15.188 "enable_zerocopy_send_client": false, 00:18:15.188 "enable_zerocopy_send_server": true, 00:18:15.188 "impl_name": "posix", 00:18:15.188 "recv_buf_size": 2097152, 00:18:15.188 "send_buf_size": 2097152, 00:18:15.188 "tls_version": 0, 00:18:15.188 "zerocopy_threshold": 0 00:18:15.188 } 00:18:15.188 } 00:18:15.188 ] 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "subsystem": "vmd", 00:18:15.188 "config": [] 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "subsystem": "accel", 00:18:15.188 "config": [ 00:18:15.188 { 00:18:15.188 "method": "accel_set_options", 00:18:15.188 "params": { 00:18:15.188 "buf_count": 2048, 00:18:15.188 "large_cache_size": 16, 00:18:15.188 "sequence_count": 2048, 00:18:15.188 "small_cache_size": 128, 00:18:15.188 "task_count": 2048 00:18:15.188 } 00:18:15.188 } 00:18:15.188 ] 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "subsystem": "bdev", 00:18:15.188 "config": [ 00:18:15.188 { 00:18:15.188 "method": "bdev_set_options", 00:18:15.188 "params": { 00:18:15.188 "bdev_auto_examine": true, 00:18:15.188 "bdev_io_cache_size": 256, 00:18:15.188 "bdev_io_pool_size": 65535, 00:18:15.188 "iobuf_large_cache_size": 16, 00:18:15.188 "iobuf_small_cache_size": 128 00:18:15.188 } 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "method": "bdev_raid_set_options", 00:18:15.188 "params": { 00:18:15.188 "process_max_bandwidth_mb_sec": 0, 00:18:15.188 "process_window_size_kb": 1024 00:18:15.188 } 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "method": "bdev_iscsi_set_options", 00:18:15.188 "params": { 00:18:15.188 "timeout_sec": 30 00:18:15.188 } 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "method": "bdev_nvme_set_options", 00:18:15.188 "params": { 00:18:15.188 "action_on_timeout": "none", 00:18:15.188 "allow_accel_sequence": false, 00:18:15.188 "arbitration_burst": 0, 00:18:15.188 "bdev_retry_count": 3, 00:18:15.188 "ctrlr_loss_timeout_sec": 0, 00:18:15.188 "delay_cmd_submit": true, 00:18:15.188 "dhchap_dhgroups": [ 00:18:15.188 "null", 00:18:15.188 "ffdhe2048", 00:18:15.188 "ffdhe3072", 00:18:15.188 "ffdhe4096", 00:18:15.188 "ffdhe6144", 00:18:15.188 "ffdhe8192" 00:18:15.188 ], 00:18:15.188 "dhchap_digests": [ 00:18:15.188 "sha256", 00:18:15.188 "sha384", 00:18:15.188 "sha512" 00:18:15.188 ], 00:18:15.188 
"disable_auto_failback": false, 00:18:15.188 "fast_io_fail_timeout_sec": 0, 00:18:15.188 "generate_uuids": false, 00:18:15.188 "high_priority_weight": 0, 00:18:15.188 "io_path_stat": false, 00:18:15.188 "io_queue_requests": 512, 00:18:15.188 "keep_alive_timeout_ms": 10000, 00:18:15.188 "low_priority_weight": 0, 00:18:15.188 "medium_priority_weight": 0, 00:18:15.188 "nvme_adminq_poll_period_us": 10000, 00:18:15.188 "nvme_error_stat": false, 00:18:15.188 "nvme_ioq_poll_period_us": 0, 00:18:15.188 "rdma_cm_event_timeout_ms": 0, 00:18:15.188 "rdma_max_cq_size": 0, 00:18:15.188 "rdma_srq_size": 0, 00:18:15.188 "reconnect_delay_sec": 0, 00:18:15.188 "timeout_admin_us": 0, 00:18:15.188 "timeout_us": 0, 00:18:15.188 "transport_ack_timeout": 0, 00:18:15.188 "transport_retry_count": 4, 00:18:15.188 "transport_tos": 0 00:18:15.188 } 00:18:15.188 }, 00:18:15.188 { 00:18:15.188 "method": "bdev_nvme_attach_controller", 00:18:15.188 "params": { 00:18:15.188 "adrfam": "IPv4", 00:18:15.188 "ctrlr_loss_timeout_sec": 0, 00:18:15.188 "ddgst": false, 00:18:15.188 "fast_io_fail_timeout_sec": 0, 00:18:15.188 "hdgst": false, 00:18:15.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.189 "name": "nvme0", 00:18:15.189 "prchk_guard": false, 00:18:15.189 "prchk_reftag": false, 00:18:15.189 "psk": "key0", 00:18:15.189 "reconnect_delay_sec": 0, 00:18:15.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.189 "traddr": "10.0.0.2", 00:18:15.189 "trsvcid": "4420", 00:18:15.189 "trtype": "TCP" 00:18:15.189 } 00:18:15.189 }, 00:18:15.189 { 00:18:15.189 "method": "bdev_nvme_set_hotplug", 00:18:15.189 "params": { 00:18:15.189 "enable": false, 00:18:15.189 "period_us": 100000 00:18:15.189 } 00:18:15.189 }, 00:18:15.189 { 00:18:15.189 "method": "bdev_enable_histogram", 00:18:15.189 "params": { 00:18:15.189 "enable": true, 00:18:15.189 "name": "nvme0n1" 00:18:15.189 } 00:18:15.189 }, 00:18:15.189 { 00:18:15.189 "method": "bdev_wait_for_examine" 00:18:15.189 } 00:18:15.189 ] 00:18:15.189 }, 00:18:15.189 { 00:18:15.189 "subsystem": "nbd", 00:18:15.189 "config": [] 00:18:15.189 } 00:18:15.189 ] 00:18:15.189 }' 00:18:15.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.189 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.189 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.189 05:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 [2024-07-25 05:24:19.139446] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:18:15.189 [2024-07-25 05:24:19.139540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84450 ] 00:18:15.189 [2024-07-25 05:24:19.280551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.189 [2024-07-25 05:24:19.359217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.448 [2024-07-25 05:24:19.493628] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.015 05:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.015 05:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:16.015 05:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:16.015 05:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:16.581 05:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.581 05:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:16.581 Running I/O for 1 seconds... 00:18:17.517 00:18:17.517 Latency(us) 00:18:17.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.517 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:17.517 Verification LBA range: start 0x0 length 0x2000 00:18:17.517 nvme0n1 : 1.02 3879.37 15.15 0.00 0.00 32635.27 6851.49 26810.18 00:18:17.517 =================================================================================================================== 00:18:17.517 Total : 3879.37 15.15 0.00 0.00 32635.27 6851.49 26810.18 00:18:17.517 0 00:18:17.517 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:17.775 nvmf_trace.0 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:17.775 05:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84450 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84450 ']' 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84450 00:18:17.775 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:17.776 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.776 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84450 00:18:17.776 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:17.776 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:17.776 killing process with pid 84450 00:18:17.776 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84450' 00:18:17.776 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84450 00:18:17.776 Received shutdown signal, test time was about 1.000000 seconds 00:18:17.776 00:18:17.776 Latency(us) 00:18:17.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.776 =================================================================================================================== 00:18:17.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.776 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84450 00:18:18.034 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:18.034 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:18.034 05:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.034 rmmod nvme_tcp 00:18:18.034 rmmod nvme_fabrics 00:18:18.034 rmmod nvme_keyring 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84406 ']' 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84406 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84406 ']' 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84406 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84406 00:18:18.034 05:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.034 killing process with pid 84406 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84406' 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84406 00:18:18.034 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84406 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zD8smpJmBb /tmp/tmp.BCQf2O0WAX /tmp/tmp.JFjRFTOUGt 00:18:18.295 00:18:18.295 real 1m22.735s 00:18:18.295 user 2m12.029s 00:18:18.295 sys 0m26.836s 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:18.295 ************************************ 00:18:18.295 END TEST nvmf_tls 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.295 ************************************ 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:18.295 ************************************ 00:18:18.295 START TEST nvmf_fips 00:18:18.295 ************************************ 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:18.295 * Looking for test storage... 
00:18:18.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:18.295 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 
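fips.sh gates on OpenSSL >= 3.0.0 because the provider-based FIPS module only exists from 3.0 onward; the cmp_versions trace below walks the two version strings field by field (3 vs 3, 0 vs 0, 9 vs 0) and returns 0. A condensed equivalent of that gate, using sort -V instead of the script's own loop:

ver=$(openssl version | awk '{print $2}')                       # 3.0.9 in this run
[[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) == 3.0.0 ]] || exit 1

Further down, the script double-checks that FIPS mode is actually enforced: with OPENSSL_CONF pointing at spdk_fips.conf, 'openssl md5' is expected to fail with "Error setting digest", since MD5 cannot be fetched from the FIPS provider.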
00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:18.555 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:18.556 Error setting digest 00:18:18.556 0002C544D27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:18.556 0002C544D27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:18.556 Cannot find device "nvmf_tgt_br" 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.556 Cannot find device "nvmf_tgt_br2" 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:18.556 Cannot find device "nvmf_tgt_br" 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:18.556 Cannot find device "nvmf_tgt_br2" 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:18:18.556 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:18.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:18:18.816 00:18:18.816 --- 10.0.0.2 ping statistics --- 00:18:18.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.816 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:18:18.816 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:18.817 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.817 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:18.817 00:18:18.817 --- 10.0.0.3 ping statistics --- 00:18:18.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.817 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:18.817 00:18:18.817 --- 10.0.0.1 ping statistics --- 00:18:18.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.817 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=84739 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 84739 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84739 ']' 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.817 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:19.076 [2024-07-25 05:24:23.064055] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
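Everything between remove_spdk_ns and the pings is nvmf_veth_init rebuilding the virtual test network: a veth pair per endpoint, the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace, and all bridge-side peers enslaved to nvmf_br, after which 10.0.0.1 (initiator side), 10.0.0.2 and 10.0.0.3 (target side) must each answer a ping before the target is started. Condensed from the ip commands above (the second target interface is wired the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT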
00:18:19.076 [2024-07-25 05:24:23.064167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.076 [2024-07-25 05:24:23.199339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.338 [2024-07-25 05:24:23.259361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.338 [2024-07-25 05:24:23.259418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.338 [2024-07-25 05:24:23.259431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.338 [2024-07-25 05:24:23.259440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.338 [2024-07-25 05:24:23.259448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.338 [2024-07-25 05:24:23.259474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.906 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.906 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:19.906 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.906 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.906 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:20.164 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.164 [2024-07-25 05:24:24.339701] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.423 [2024-07-25 05:24:24.355649] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.423 [2024-07-25 05:24:24.355843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.423 [2024-07-25 05:24:24.383273] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
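The fixed key used below, NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:, is a TLS PSK in the NVMe/TCP interchange format (the 01 hash field corresponds to SHA-256 in that format). The test materializes it as a file with 0600 permissions and hands the path to the target, which is exactly the "PSK path" feature the tcp.c warning above flags as deprecated for v24.09 in favor of keyring names:

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt   # keep the PSK unreadable to other users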
00:18:20.423 malloc0 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=84797 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 84797 /var/tmp/bdevperf.sock 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84797 ']' 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.423 05:24:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:20.423 [2024-07-25 05:24:24.476632] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:20.423 [2024-07-25 05:24:24.476743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84797 ] 00:18:20.694 [2024-07-25 05:24:24.612302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.694 [2024-07-25 05:24:24.679797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.642 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.642 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:21.642 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:21.900 [2024-07-25 05:24:25.844526] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.900 [2024-07-25 05:24:25.844668] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:21.900 TLSTESTn1 00:18:21.900 05:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:21.900 Running I/O for 10 seconds... 
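bdevperf runs with -z, so it sits idle on /var/tmp/bdevperf.sock until driven over RPC: the attach above passes --psk with the key file directly (triggering the spdk_nvme_ctrlr_opts.psk deprecation notice), the controller bdev TLSTEST exposes its namespace as TLSTESTn1, and perform_tests then runs the configured 10-second verify workload. The same sequence, condensed with the full script paths shortened:

bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests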
00:18:34.100
00:18:34.100 Latency(us)
00:18:34.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.100 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:34.100 Verification LBA range: start 0x0 length 0x2000
00:18:34.100 TLSTESTn1 : 10.02 3867.02 15.11 0.00 0.00 33038.74 6464.23 30980.65
00:18:34.100 ===================================================================================================================
00:18:34.100 Total : 3867.02 15.11 0.00 0.00 33038.74 6464.23 30980.65
00:18:34.100 0
00:18:34.100 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:18:34.100 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:18:34.100 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:18:34.100 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:18:34.100 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:34.101 nvmf_trace.0
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84797
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84797 ']'
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84797
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84797
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:18:34.101 killing process with pid 84797
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84797'
00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84797
00:18:34.101 Received shutdown signal, test time was about 10.000000 seconds
00:18:34.101
00:18:34.101 Latency(us)
00:18:34.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.101 ===================================================================================================================
00:18:34.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:34.101 [2024-07-25 05:24:36.213451]
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84797 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.101 rmmod nvme_tcp 00:18:34.101 rmmod nvme_fabrics 00:18:34.101 rmmod nvme_keyring 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 84739 ']' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 84739 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84739 ']' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84739 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84739 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:34.101 killing process with pid 84739 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84739' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84739 00:18:34.101 [2024-07-25 05:24:36.487374] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84739 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:34.101 00:18:34.101 real 0m14.371s 00:18:34.101 user 0m19.920s 00:18:34.101 sys 0m5.630s 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:34.101 ************************************ 00:18:34.101 END TEST nvmf_fips 00:18:34.101 ************************************ 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:18:34.101 00:18:34.101 real 6m32.867s 00:18:34.101 user 15m55.180s 00:18:34.101 sys 1m15.865s 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.101 05:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.101 ************************************ 00:18:34.101 END TEST nvmf_target_extra 00:18:34.101 ************************************ 00:18:34.101 05:24:36 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:34.101 05:24:36 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:34.101 05:24:36 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.101 05:24:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:34.101 ************************************ 00:18:34.101 START TEST nvmf_host 00:18:34.101 ************************************ 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:34.101 * Looking for test storage... 
00:18:34.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.101 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.102 ************************************ 00:18:34.102 START TEST nvmf_multicontroller 00:18:34.102 ************************************ 00:18:34.102 05:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:18:34.102 * Looking for test storage... 
00:18:34.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.102 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:34.103 
05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:34.103 05:24:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:34.103 Cannot find device "nvmf_tgt_br" 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.103 Cannot find device "nvmf_tgt_br2" 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:34.103 Cannot find device "nvmf_tgt_br" 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:34.103 Cannot find device "nvmf_tgt_br2" 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set 
nvmf_init_br up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:34.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:18:34.103 00:18:34.103 --- 10.0.0.2 ping statistics --- 00:18:34.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.103 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:34.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:18:34.103 00:18:34.103 --- 10.0.0.3 ping statistics --- 00:18:34.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.103 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:34.103 00:18:34.103 --- 10.0.0.1 ping statistics --- 00:18:34.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.103 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.103 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85195 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85195 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85195 ']' 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 [2024-07-25 05:24:37.465169] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
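The ip/iptables churn above is nvmf_veth_init from nvmf/common.sh building the virtual topology the whole host suite runs on: the target lives inside the nvmf_tgt_ns_spdk namespace with two veth legs (10.0.0.2 and 10.0.0.3), the initiator keeps 10.0.0.1 on the host side, and all bridge ends meet in nvmf_br with TCP/4420 allowed through, verified by the three pings. Reduced to one target leg (the nvmf_tgt_if2/10.0.0.3 pair is set up identically), the sequence is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target leg, the check that passed above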
00:18:34.104 [2024-07-25 05:24:37.465936] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.104 [2024-07-25 05:24:37.609921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:34.104 [2024-07-25 05:24:37.686773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.104 [2024-07-25 05:24:37.687067] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.104 [2024-07-25 05:24:37.687163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.104 [2024-07-25 05:24:37.687260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.104 [2024-07-25 05:24:37.687337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.104 [2024-07-25 05:24:37.687594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.104 [2024-07-25 05:24:37.687664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.104 [2024-07-25 05:24:37.687672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 [2024-07-25 05:24:37.817889] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 Malloc0 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 [2024-07-25 05:24:37.872581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 [2024-07-25 05:24:37.880494] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 Malloc1 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85234 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85234 /var/tmp/bdevperf.sock 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85234 ']' 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
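The RPCs traced above build the crux of the multicontroller test: two subsystems, cnode1 and cnode2, each backed by a 64 MiB malloc bdev and each listening on both port 4420 and port 4421 of the same address, with bdevperf started idle (-z) so the attach attempts can be issued by hand over /var/tmp/bdevperf.sock. A condensed loop equivalent to the traced sequence (NQNs, serials and ports exactly as above; -u 8192 is the in-capsule data size passed to the transport):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2; do
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$((i - 1))
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
  done

The attach sequence that follows pins NVMe0 to cnode1 with a fixed host address and port (-i 10.0.0.2 -c 60000), then uses the NOT wrapper to assert that reusing the name NVMe0 with a different hostnqn, a different subsystem, or "multipath": "disable"/"failover" on the same network path is rejected with Code=-114; only a genuinely new path, the 4421 listener, attaches cleanly.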
00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.104 05:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.104 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.104 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:18:34.104 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:18:34.104 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.104 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.364 NVMe0n1 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.364 1 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.364 2024/07/25 05:24:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 
hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:18:34.364 request: 00:18:34.364 { 00:18:34.364 "method": "bdev_nvme_attach_controller", 00:18:34.364 "params": { 00:18:34.364 "name": "NVMe0", 00:18:34.364 "trtype": "tcp", 00:18:34.364 "traddr": "10.0.0.2", 00:18:34.364 "adrfam": "ipv4", 00:18:34.364 "trsvcid": "4420", 00:18:34.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.364 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:18:34.364 "hostaddr": "10.0.0.2", 00:18:34.364 "hostsvcid": "60000", 00:18:34.364 "prchk_reftag": false, 00:18:34.364 "prchk_guard": false, 00:18:34.364 "hdgst": false, 00:18:34.364 "ddgst": false 00:18:34.364 } 00:18:34.364 } 00:18:34.364 Got JSON-RPC error response 00:18:34.364 GoRPCClient: error on JSON-RPC call 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.364 2024/07/25 05:24:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:18:34.364 request: 00:18:34.364 { 00:18:34.364 "method": "bdev_nvme_attach_controller", 00:18:34.364 "params": { 00:18:34.364 "name": "NVMe0", 00:18:34.364 "trtype": "tcp", 00:18:34.364 "traddr": "10.0.0.2", 00:18:34.364 "adrfam": "ipv4", 00:18:34.364 "trsvcid": "4420", 00:18:34.364 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:34.364 "hostaddr": "10.0.0.2", 00:18:34.364 "hostsvcid": "60000", 00:18:34.364 "prchk_reftag": false, 00:18:34.364 "prchk_guard": false, 00:18:34.364 "hdgst": false, 00:18:34.364 "ddgst": false 00:18:34.364 } 00:18:34.364 } 00:18:34.364 Got JSON-RPC error response 00:18:34.364 GoRPCClient: error on JSON-RPC call 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.364 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.364 2024/07/25 05:24:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:18:34.364 request: 00:18:34.364 { 
00:18:34.364 "method": "bdev_nvme_attach_controller", 00:18:34.364 "params": { 00:18:34.364 "name": "NVMe0", 00:18:34.364 "trtype": "tcp", 00:18:34.364 "traddr": "10.0.0.2", 00:18:34.364 "adrfam": "ipv4", 00:18:34.364 "trsvcid": "4420", 00:18:34.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.364 "hostaddr": "10.0.0.2", 00:18:34.364 "hostsvcid": "60000", 00:18:34.364 "prchk_reftag": false, 00:18:34.364 "prchk_guard": false, 00:18:34.364 "hdgst": false, 00:18:34.364 "ddgst": false, 00:18:34.365 "multipath": "disable" 00:18:34.365 } 00:18:34.365 } 00:18:34.365 Got JSON-RPC error response 00:18:34.365 GoRPCClient: error on JSON-RPC call 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.365 2024/07/25 05:24:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:18:34.365 request: 00:18:34.365 { 00:18:34.365 "method": "bdev_nvme_attach_controller", 00:18:34.365 "params": { 00:18:34.365 "name": "NVMe0", 00:18:34.365 "trtype": "tcp", 00:18:34.365 
"traddr": "10.0.0.2", 00:18:34.365 "adrfam": "ipv4", 00:18:34.365 "trsvcid": "4420", 00:18:34.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.365 "hostaddr": "10.0.0.2", 00:18:34.365 "hostsvcid": "60000", 00:18:34.365 "prchk_reftag": false, 00:18:34.365 "prchk_guard": false, 00:18:34.365 "hdgst": false, 00:18:34.365 "ddgst": false, 00:18:34.365 "multipath": "failover" 00:18:34.365 } 00:18:34.365 } 00:18:34.365 Got JSON-RPC error response 00:18:34.365 GoRPCClient: error on JSON-RPC call 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.365 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.365 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.624 00:18:34.624 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.624 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:34.624 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.624 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.624 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:18:34.624 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.624 05:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:18:34.624 05:24:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.560 0 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85234 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85234 ']' 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85234 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.560 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85234 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85234' 00:18:35.819 killing process with pid 85234 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85234 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85234 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find 
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat
00:18:35.819 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:18:35.819 [2024-07-25 05:24:37.989196] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:18:35.819 [2024-07-25 05:24:37.989320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85234 ]
00:18:35.819 [2024-07-25 05:24:38.125089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.819 [2024-07-25 05:24:38.183744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:35.819 [2024-07-25 05:24:38.560353] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 8d0d0e33-ac64-4ee3-9e69-af9aa4b13110 already exists
00:18:35.819 [2024-07-25 05:24:38.560427] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:8d0d0e33-ac64-4ee3-9e69-af9aa4b13110 alias for bdev NVMe1n1
00:18:35.819 [2024-07-25 05:24:38.560446] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:18:35.819 Running I/O for 1 seconds...
00:18:35.819
00:18:35.819 Latency(us)
00:18:35.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.819 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:18:35.819 NVMe0n1 : 1.01 19286.62 75.34 0.00 0.00 6618.05 3351.27 11856.06
00:18:35.819 ===================================================================================================================
00:18:35.819 Total : 19286.62 75.34 0.00 0.00 6618.05 3351.27 11856.06
00:18:35.819 Received shutdown signal, test time was about 1.000000 seconds
00:18:35.819
00:18:35.819 Latency(us)
00:18:35.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.819 ===================================================================================================================
00:18:35.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:35.819 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:35.819 05:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:35.819 rmmod nvme_tcp
00:18:36.078 rmmod nvme_fabrics
00:18:36.078 rmmod nvme_keyring
00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller --
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85195 ']' 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85195 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85195 ']' 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85195 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85195 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:36.078 killing process with pid 85195 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85195' 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85195 00:18:36.078 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85195 00:18:36.336 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.336 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.336 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.336 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.336 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:36.337 00:18:36.337 real 0m3.370s 00:18:36.337 user 0m9.824s 00:18:36.337 sys 0m0.887s 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:36.337 ************************************ 00:18:36.337 END TEST nvmf_multicontroller 00:18:36.337 ************************************ 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.337 ************************************ 00:18:36.337 START TEST nvmf_aer 00:18:36.337 ************************************ 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:18:36.337 * Looking for test storage... 00:18:36.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 
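One detail worth noting in the common.sh setup traced above: the host identity is regenerated on every run rather than hard-coded. A sketch of that pattern, where the UUID extraction is an assumption about how NVME_HOSTID is derived (the trace only shows the resulting values):

    # nvme-cli generates a fresh host NQN; its trailing UUID doubles as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: strip everything up to the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")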
00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:36.337 Cannot find device "nvmf_tgt_br" 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # true 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:36.337 Cannot find device "nvmf_tgt_br2" 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # true 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:36.337 05:24:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:36.337 Cannot find device "nvmf_tgt_br" 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # true 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:36.337 Cannot find device "nvmf_tgt_br2" 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # true 00:18:36.337 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:36.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:36.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:36.595 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # 
ip link set nvmf_init_br master nvmf_br
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:18:36.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:36.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:18:36.596
00:18:36.596 --- 10.0.0.2 ping statistics ---
00:18:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:36.596 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:18:36.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:36.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
00:18:36.596
00:18:36.596 --- 10.0.0.3 ping statistics ---
00:18:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:36.596 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:36.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:36.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:18:36.596
00:18:36.596 --- 10.0.0.1 ping statistics ---
00:18:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:36.596 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@433 -- # return 0
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:36.596 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=85448
00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 85448
00:18:36.854 05:24:40
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 85448 ']' 00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.854 05:24:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:36.854 [2024-07-25 05:24:40.837237] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:36.854 [2024-07-25 05:24:40.837328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.854 [2024-07-25 05:24:40.974070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.113 [2024-07-25 05:24:41.044248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.113 [2024-07-25 05:24:41.044305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.113 [2024-07-25 05:24:41.044320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.113 [2024-07-25 05:24:41.044330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.113 [2024-07-25 05:24:41.044339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
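The tracepoint notices above double as usage documentation: with tracepoint group mask 0xFFFF enabled and shm id 0 (the -i 0 the target was started with), a trace can be pulled from the live target exactly as the log suggests:

    # Snapshot nvmf tracepoints from the running target (shm id 0):
    spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0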
00:18:37.113 [2024-07-25 05:24:41.044453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.113 [2024-07-25 05:24:41.045996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.113 [2024-07-25 05:24:41.046118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.113 [2024-07-25 05:24:41.046109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.113 [2024-07-25 05:24:41.176357] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.113 Malloc0 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.113 [2024-07-25 05:24:41.236662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.113 
05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:18:37.113 [
00:18:37.113 {
00:18:37.113 "allow_any_host": true,
00:18:37.113 "hosts": [],
00:18:37.113 "listen_addresses": [],
00:18:37.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:37.113 "subtype": "Discovery"
00:18:37.113 },
00:18:37.113 {
00:18:37.113 "allow_any_host": true,
00:18:37.113 "hosts": [],
00:18:37.113 "listen_addresses": [
00:18:37.113 {
00:18:37.113 "adrfam": "IPv4",
00:18:37.113 "traddr": "10.0.0.2",
00:18:37.113 "trsvcid": "4420",
00:18:37.113 "trtype": "TCP"
00:18:37.113 }
00:18:37.113 ],
00:18:37.113 "max_cntlid": 65519,
00:18:37.113 "max_namespaces": 2,
00:18:37.113 "min_cntlid": 1,
00:18:37.113 "model_number": "SPDK bdev Controller",
00:18:37.113 "namespaces": [
00:18:37.113 {
00:18:37.113 "bdev_name": "Malloc0",
00:18:37.113 "name": "Malloc0",
00:18:37.113 "nguid": "D74D1AC874B744D9A41A926CF13CCFE2",
00:18:37.113 "nsid": 1,
00:18:37.113 "uuid": "d74d1ac8-74b7-44d9-a41a-926cf13ccfe2"
00:18:37.113 }
00:18:37.113 ],
00:18:37.113 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.113 "serial_number": "SPDK00000000000001",
00:18:37.113 "subtype": "NVMe"
00:18:37.113 }
00:18:37.113 ]
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
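The listing just returned is the baseline the AER test diffs against: the discovery subsystem plus cnode1 carrying Malloc0 as namespace 1. A hedged sketch of pulling the same view by hand over the target's default RPC socket (the jq filter is illustrative, not part of the test):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'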
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=85483
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']'
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1
00:18:37.113 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']'
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:18:37.372 Malloc1
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:18:37.372 Asynchronous Event Request test
00:18:37.372 Attaching to 10.0.0.2
00:18:37.372 Attached to 10.0.0.2
00:18:37.372 Registering asynchronous event callbacks...
00:18:37.372 Starting namespace attribute notice tests for all controllers...
00:18:37.372 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:18:37.372 aer_cb - Changed Namespace
00:18:37.372 Cleaning up...
00:18:37.372 [
00:18:37.372 {
00:18:37.372 "allow_any_host": true,
00:18:37.372 "hosts": [],
00:18:37.372 "listen_addresses": [],
00:18:37.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:37.372 "subtype": "Discovery"
00:18:37.372 },
00:18:37.372 {
00:18:37.372 "allow_any_host": true,
00:18:37.372 "hosts": [],
00:18:37.372 "listen_addresses": [
00:18:37.372 {
00:18:37.372 "adrfam": "IPv4",
00:18:37.372 "traddr": "10.0.0.2",
00:18:37.372 "trsvcid": "4420",
00:18:37.372 "trtype": "TCP"
00:18:37.372 }
00:18:37.372 ],
00:18:37.372 "max_cntlid": 65519,
00:18:37.372 "max_namespaces": 2,
00:18:37.372 "min_cntlid": 1,
00:18:37.372 "model_number": "SPDK bdev Controller",
00:18:37.372 "namespaces": [
00:18:37.372 {
00:18:37.372 "bdev_name": "Malloc0",
00:18:37.372 "name": "Malloc0",
00:18:37.372 "nguid": "D74D1AC874B744D9A41A926CF13CCFE2",
00:18:37.372 "nsid": 1,
00:18:37.372 "uuid": "d74d1ac8-74b7-44d9-a41a-926cf13ccfe2"
00:18:37.372 },
00:18:37.372 {
00:18:37.372 "bdev_name": "Malloc1",
00:18:37.372 "name": "Malloc1",
00:18:37.372 "nguid": "630749DEF68342D19CA8AF7896A5D0F3",
00:18:37.372 "nsid": 2,
00:18:37.372 "uuid": "630749de-f683-42d1-9ca8-af7896a5d0f3"
00:18:37.372 }
00:18:37.372 ],
00:18:37.372 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.372 "serial_number": "SPDK00000000000001",
00:18:37.372 "subtype": "NVMe"
00:18:37.372 }
00:18:37.372 ]
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 85483
00:18:37.372 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:18:37.373 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.373 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer --
common/autotest_common.sh@10 -- # set +x 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.631 rmmod nvme_tcp 00:18:37.631 rmmod nvme_fabrics 00:18:37.631 rmmod nvme_keyring 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 85448 ']' 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 85448 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 85448 ']' 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 85448 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85448 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.631 killing process with pid 85448 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85448' 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 85448 00:18:37.631 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 85448 00:18:37.890 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.890 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:18:37.890 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:37.890 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:37.891 00:18:37.891 real 0m1.553s 00:18:37.891 user 0m3.307s 00:18:37.891 sys 0m0.537s 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:37.891 ************************************ 00:18:37.891 END TEST nvmf_aer 00:18:37.891 ************************************ 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.891 ************************************ 00:18:37.891 START TEST nvmf_async_init 00:18:37.891 ************************************ 00:18:37.891 05:24:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:37.891 * Looking for test storage... 
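Every test in this suite exits through the same nvmftestfini sequence seen twice now: flush I/O, unload the kernel initiator modules, kill the target, then dismantle the namespace. A condensed sketch of that teardown; the netns deletion is an assumption about what _remove_spdk_ns does, since the trace hides it behind xtrace_disable:

    sync                                   # nvmf/common.sh@117
    modprobe -v -r nvme-tcp                # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # killprocess: stop the nvmf_tgt for this test
    ip netns delete nvmf_tgt_ns_spdk       # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if          # drop the initiator-side address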
00:18:37.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:37.891 05:24:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=df63c009d77c48d98e23b60e591b52bf 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:37.891 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:37.892 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:37.892 Cannot find device "nvmf_tgt_br" 00:18:38.150 05:24:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.150 Cannot find device "nvmf_tgt_br2" 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:38.150 Cannot find device "nvmf_tgt_br" 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:38.150 Cannot find device "nvmf_tgt_br2" 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:38.150 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:38.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:38.408 00:18:38.408 --- 10.0.0.2 ping statistics --- 00:18:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.408 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:38.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:38.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:38.408 00:18:38.408 --- 10.0.0.3 ping statistics --- 00:18:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.408 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:38.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:18:38.408 00:18:38.408 --- 10.0.0.1 ping statistics --- 00:18:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.408 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:38.408 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=85652 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 85652 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 85652 ']' 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.409 05:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:38.409 [2024-07-25 05:24:42.454427] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:38.409 [2024-07-25 05:24:42.454528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.667 [2024-07-25 05:24:42.592498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.667 [2024-07-25 05:24:42.683093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
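[Editor's note: the nvmf_veth_init trace above builds a self-contained test network before the target comes up. The sketch below distills those commands into one runnable script; it assumes root privileges and stock iproute2/iptables, and every interface name and address is taken from the log itself.]

    # Two network stacks on one host: the initiator stays in the default
    # namespace, the target runs inside nvmf_tgt_ns_spdk.
    ip netns add nvmf_tgt_ns_spdk
    # Three veth pairs: one initiator-side, two target-side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Move the target ends into the namespace and address everything.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # A bridge stitches the three host-side peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Admit NVMe/TCP (port 4420) and bridged traffic, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3

[The bridge-of-veth-peers layout lets a single host play both NVMe-oF initiator and target over a real TCP path without touching any physical NIC, which is why the pings above are the gate before the target starts.]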
00:18:38.667 [2024-07-25 05:24:42.683170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.667 [2024-07-25 05:24:42.683186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.667 [2024-07-25 05:24:42.683196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.667 [2024-07-25 05:24:42.683205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.667 [2024-07-25 05:24:42.683241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.235 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.235 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:18:39.235 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.235 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.235 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 [2024-07-25 05:24:43.450123] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 null0 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g df63c009d77c48d98e23b60e591b52bf 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 [2024-07-25 05:24:43.490223] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.493 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.751 nvme0n1 00:18:39.751 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.752 [ 00:18:39.752 { 00:18:39.752 "aliases": [ 00:18:39.752 "df63c009-d77c-48d9-8e23-b60e591b52bf" 00:18:39.752 ], 00:18:39.752 "assigned_rate_limits": { 00:18:39.752 "r_mbytes_per_sec": 0, 00:18:39.752 "rw_ios_per_sec": 0, 00:18:39.752 "rw_mbytes_per_sec": 0, 00:18:39.752 "w_mbytes_per_sec": 0 00:18:39.752 }, 00:18:39.752 "block_size": 512, 00:18:39.752 "claimed": false, 00:18:39.752 "driver_specific": { 00:18:39.752 "mp_policy": "active_passive", 00:18:39.752 "nvme": [ 00:18:39.752 { 00:18:39.752 "ctrlr_data": { 00:18:39.752 "ana_reporting": false, 00:18:39.752 "cntlid": 1, 00:18:39.752 "firmware_revision": "24.09", 00:18:39.752 "model_number": "SPDK bdev Controller", 00:18:39.752 "multi_ctrlr": true, 00:18:39.752 "oacs": { 00:18:39.752 "firmware": 0, 00:18:39.752 "format": 0, 00:18:39.752 "ns_manage": 0, 00:18:39.752 "security": 0 00:18:39.752 }, 00:18:39.752 "serial_number": "00000000000000000000", 00:18:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:39.752 "vendor_id": "0x8086" 00:18:39.752 }, 00:18:39.752 "ns_data": { 00:18:39.752 "can_share": true, 00:18:39.752 "id": 1 00:18:39.752 }, 00:18:39.752 "trid": { 00:18:39.752 "adrfam": "IPv4", 00:18:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:39.752 "traddr": "10.0.0.2", 00:18:39.752 "trsvcid": "4420", 00:18:39.752 "trtype": "TCP" 00:18:39.752 }, 00:18:39.752 "vs": { 00:18:39.752 "nvme_version": "1.3" 00:18:39.752 } 00:18:39.752 } 00:18:39.752 ] 00:18:39.752 }, 00:18:39.752 "memory_domains": [ 00:18:39.752 { 00:18:39.752 "dma_device_id": "system", 00:18:39.752 "dma_device_type": 1 00:18:39.752 } 00:18:39.752 ], 00:18:39.752 "name": "nvme0n1", 00:18:39.752 "num_blocks": 2097152, 00:18:39.752 "product_name": "NVMe disk", 00:18:39.752 "supported_io_types": { 00:18:39.752 "abort": true, 00:18:39.752 "compare": true, 
00:18:39.752 "compare_and_write": true, 00:18:39.752 "copy": true, 00:18:39.752 "flush": true, 00:18:39.752 "get_zone_info": false, 00:18:39.752 "nvme_admin": true, 00:18:39.752 "nvme_io": true, 00:18:39.752 "nvme_io_md": false, 00:18:39.752 "nvme_iov_md": false, 00:18:39.752 "read": true, 00:18:39.752 "reset": true, 00:18:39.752 "seek_data": false, 00:18:39.752 "seek_hole": false, 00:18:39.752 "unmap": false, 00:18:39.752 "write": true, 00:18:39.752 "write_zeroes": true, 00:18:39.752 "zcopy": false, 00:18:39.752 "zone_append": false, 00:18:39.752 "zone_management": false 00:18:39.752 }, 00:18:39.752 "uuid": "df63c009-d77c-48d9-8e23-b60e591b52bf", 00:18:39.752 "zoned": false 00:18:39.752 } 00:18:39.752 ] 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.752 [2024-07-25 05:24:43.750830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:39.752 [2024-07-25 05:24:43.750942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd7b00 (9): Bad file descriptor 00:18:39.752 [2024-07-25 05:24:43.893109] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:39.752 [ 00:18:39.752 { 00:18:39.752 "aliases": [ 00:18:39.752 "df63c009-d77c-48d9-8e23-b60e591b52bf" 00:18:39.752 ], 00:18:39.752 "assigned_rate_limits": { 00:18:39.752 "r_mbytes_per_sec": 0, 00:18:39.752 "rw_ios_per_sec": 0, 00:18:39.752 "rw_mbytes_per_sec": 0, 00:18:39.752 "w_mbytes_per_sec": 0 00:18:39.752 }, 00:18:39.752 "block_size": 512, 00:18:39.752 "claimed": false, 00:18:39.752 "driver_specific": { 00:18:39.752 "mp_policy": "active_passive", 00:18:39.752 "nvme": [ 00:18:39.752 { 00:18:39.752 "ctrlr_data": { 00:18:39.752 "ana_reporting": false, 00:18:39.752 "cntlid": 2, 00:18:39.752 "firmware_revision": "24.09", 00:18:39.752 "model_number": "SPDK bdev Controller", 00:18:39.752 "multi_ctrlr": true, 00:18:39.752 "oacs": { 00:18:39.752 "firmware": 0, 00:18:39.752 "format": 0, 00:18:39.752 "ns_manage": 0, 00:18:39.752 "security": 0 00:18:39.752 }, 00:18:39.752 "serial_number": "00000000000000000000", 00:18:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:39.752 "vendor_id": "0x8086" 00:18:39.752 }, 00:18:39.752 "ns_data": { 00:18:39.752 "can_share": true, 00:18:39.752 "id": 1 00:18:39.752 }, 00:18:39.752 "trid": { 00:18:39.752 "adrfam": "IPv4", 00:18:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:39.752 "traddr": "10.0.0.2", 00:18:39.752 "trsvcid": "4420", 00:18:39.752 "trtype": "TCP" 00:18:39.752 }, 00:18:39.752 "vs": { 00:18:39.752 "nvme_version": "1.3" 00:18:39.752 } 00:18:39.752 } 00:18:39.752 ] 00:18:39.752 }, 00:18:39.752 "memory_domains": [ 00:18:39.752 { 
00:18:39.752 "dma_device_id": "system", 00:18:39.752 "dma_device_type": 1 00:18:39.752 } 00:18:39.752 ], 00:18:39.752 "name": "nvme0n1", 00:18:39.752 "num_blocks": 2097152, 00:18:39.752 "product_name": "NVMe disk", 00:18:39.752 "supported_io_types": { 00:18:39.752 "abort": true, 00:18:39.752 "compare": true, 00:18:39.752 "compare_and_write": true, 00:18:39.752 "copy": true, 00:18:39.752 "flush": true, 00:18:39.752 "get_zone_info": false, 00:18:39.752 "nvme_admin": true, 00:18:39.752 "nvme_io": true, 00:18:39.752 "nvme_io_md": false, 00:18:39.752 "nvme_iov_md": false, 00:18:39.752 "read": true, 00:18:39.752 "reset": true, 00:18:39.752 "seek_data": false, 00:18:39.752 "seek_hole": false, 00:18:39.752 "unmap": false, 00:18:39.752 "write": true, 00:18:39.752 "write_zeroes": true, 00:18:39.752 "zcopy": false, 00:18:39.752 "zone_append": false, 00:18:39.752 "zone_management": false 00:18:39.752 }, 00:18:39.752 "uuid": "df63c009-d77c-48d9-8e23-b60e591b52bf", 00:18:39.752 "zoned": false 00:18:39.752 } 00:18:39.752 ] 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.752 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.deYXoRIjpW 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.deYXoRIjpW 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.010 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.011 [2024-07-25 05:24:43.955015] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.011 [2024-07-25 05:24:43.955192] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.deYXoRIjpW 00:18:40.011 05:24:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.011 [2024-07-25 05:24:43.963007] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.deYXoRIjpW 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.011 05:24:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.011 [2024-07-25 05:24:43.971020] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.011 [2024-07-25 05:24:43.971093] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:40.011 nvme0n1 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.011 [ 00:18:40.011 { 00:18:40.011 "aliases": [ 00:18:40.011 "df63c009-d77c-48d9-8e23-b60e591b52bf" 00:18:40.011 ], 00:18:40.011 "assigned_rate_limits": { 00:18:40.011 "r_mbytes_per_sec": 0, 00:18:40.011 "rw_ios_per_sec": 0, 00:18:40.011 "rw_mbytes_per_sec": 0, 00:18:40.011 "w_mbytes_per_sec": 0 00:18:40.011 }, 00:18:40.011 "block_size": 512, 00:18:40.011 "claimed": false, 00:18:40.011 "driver_specific": { 00:18:40.011 "mp_policy": "active_passive", 00:18:40.011 "nvme": [ 00:18:40.011 { 00:18:40.011 "ctrlr_data": { 00:18:40.011 "ana_reporting": false, 00:18:40.011 "cntlid": 3, 00:18:40.011 "firmware_revision": "24.09", 00:18:40.011 "model_number": "SPDK bdev Controller", 00:18:40.011 "multi_ctrlr": true, 00:18:40.011 "oacs": { 00:18:40.011 "firmware": 0, 00:18:40.011 "format": 0, 00:18:40.011 "ns_manage": 0, 00:18:40.011 "security": 0 00:18:40.011 }, 00:18:40.011 "serial_number": "00000000000000000000", 00:18:40.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:40.011 "vendor_id": "0x8086" 00:18:40.011 }, 00:18:40.011 "ns_data": { 00:18:40.011 "can_share": true, 00:18:40.011 "id": 1 00:18:40.011 }, 00:18:40.011 "trid": { 00:18:40.011 "adrfam": "IPv4", 00:18:40.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:40.011 "traddr": "10.0.0.2", 00:18:40.011 "trsvcid": "4421", 00:18:40.011 "trtype": "TCP" 00:18:40.011 }, 00:18:40.011 "vs": { 00:18:40.011 "nvme_version": "1.3" 00:18:40.011 } 00:18:40.011 } 00:18:40.011 ] 00:18:40.011 }, 00:18:40.011 "memory_domains": [ 00:18:40.011 { 00:18:40.011 "dma_device_id": "system", 00:18:40.011 "dma_device_type": 1 00:18:40.011 } 00:18:40.011 ], 00:18:40.011 "name": "nvme0n1", 00:18:40.011 "num_blocks": 2097152, 00:18:40.011 "product_name": "NVMe disk", 00:18:40.011 "supported_io_types": { 00:18:40.011 "abort": true, 00:18:40.011 "compare": true, 00:18:40.011 
"compare_and_write": true, 00:18:40.011 "copy": true, 00:18:40.011 "flush": true, 00:18:40.011 "get_zone_info": false, 00:18:40.011 "nvme_admin": true, 00:18:40.011 "nvme_io": true, 00:18:40.011 "nvme_io_md": false, 00:18:40.011 "nvme_iov_md": false, 00:18:40.011 "read": true, 00:18:40.011 "reset": true, 00:18:40.011 "seek_data": false, 00:18:40.011 "seek_hole": false, 00:18:40.011 "unmap": false, 00:18:40.011 "write": true, 00:18:40.011 "write_zeroes": true, 00:18:40.011 "zcopy": false, 00:18:40.011 "zone_append": false, 00:18:40.011 "zone_management": false 00:18:40.011 }, 00:18:40.011 "uuid": "df63c009-d77c-48d9-8e23-b60e591b52bf", 00:18:40.011 "zoned": false 00:18:40.011 } 00:18:40.011 ] 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.deYXoRIjpW 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.011 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.011 rmmod nvme_tcp 00:18:40.011 rmmod nvme_fabrics 00:18:40.011 rmmod nvme_keyring 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 85652 ']' 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 85652 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 85652 ']' 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 85652 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85652 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:18:40.269 killing process with pid 85652 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85652' 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 85652 00:18:40.269 [2024-07-25 05:24:44.231572] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:40.269 [2024-07-25 05:24:44.231626] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 85652 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.269 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:40.529 00:18:40.529 real 0m2.534s 00:18:40.529 user 0m2.415s 00:18:40.529 sys 0m0.555s 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:40.529 ************************************ 00:18:40.529 END TEST nvmf_async_init 00:18:40.529 ************************************ 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.529 ************************************ 00:18:40.529 START TEST dma 00:18:40.529 ************************************ 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:18:40.529 * Looking for test storage... 
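[Editor's note: stripped of xtrace noise, the nvmf_async_init run that ended just above reduces to the RPC sequence below. This is a sketch, not the harness itself: the log drives these calls through the rpc_cmd wrapper, which forwards to SPDK's scripts/rpc.py, and $KEY stands for the throwaway 0600-permission PSK file created with mktemp in the trace. All flags are copied from the log.]

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g df63c009d77c48d98e23b60e591b52bf
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Plain attach: the bdev layer surfaces the namespace as nvme0n1
    # (cntlid 1 in the first bdev_get_bdevs dump above).
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_nvme_reset_controller nvme0    # reconnect; cntlid becomes 2
    rpc.py bdev_nvme_detach_controller nvme0
    # TLS leg: restrict the host list, listen on 4421 with a secure channel,
    # and reattach with the PSK (the deprecation warnings above refer to
    # this PSK-path mechanism, slated for removal in v24.09).
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    rpc.py bdev_nvme_detach_controller nvme0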
00:18:40.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=[duplicate Go/golangci/protoc PATH dump elided; identical to the copy logged at the top of this section] 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=[duplicate PATH dump elided] 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo [duplicate PATH dump elided] 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:18:40.529 00:18:40.529 real 0m0.099s 00:18:40.529 user 0m0.053s 00:18:40.529 sys 0m0.052s 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:40.529 ************************************ 00:18:40.529 END TEST dma 00:18:40.529 ************************************ 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.529 ************************************ 00:18:40.529 START TEST nvmf_identify 00:18:40.529 ************************************ 00:18:40.529 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:40.789 * Looking for test storage... 00:18:40.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=[duplicate Go/golangci/protoc PATH dump elided; identical to the blocks already logged above] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=[duplicate PATH dump elided] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=[duplicate PATH dump elided] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo [duplicate PATH dump elided] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:40.789 05:24:44
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:40.789 Cannot find device "nvmf_tgt_br" 00:18:40.789 05:24:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.789 Cannot find device "nvmf_tgt_br2" 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:40.789 Cannot find device "nvmf_tgt_br" 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # true 00:18:40.789 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:40.789 Cannot find device "nvmf_tgt_br2" 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.790 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.048 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.048 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.048 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.048 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:41.048 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:41.048 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:41.048 05:24:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:41.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:41.048 00:18:41.048 --- 10.0.0.2 ping statistics --- 00:18:41.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.048 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:41.048 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.048 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:41.048 00:18:41.048 --- 10.0.0.3 ping statistics --- 00:18:41.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.048 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:41.048 00:18:41.048 --- 10.0.0.1 ping statistics --- 00:18:41.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.048 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=85923 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 85923 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 85923 ']' 00:18:41.048 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.049 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.049 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.049 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.049 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.049 [2024-07-25 05:24:45.192432] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
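[Editor's note: nvmfappstart boils down to launching the target inside the test namespace and blocking until its RPC socket answers, as the trace shows. A sketch under those assumptions; the polling loop below is a hypothetical stand-in for the harness's waitforlisten helper, not its actual implementation.]

    # Start the target in the test namespace; the identify test runs four
    # cores (-m 0xF), where async_init above used one (-m 0x1).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the app listens on /var/tmp/spdk.sock (crude stand-in).
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done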
00:18:41.049 [2024-07-25 05:24:45.192553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.306 [2024-07-25 05:24:45.331617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.306 [2024-07-25 05:24:45.402533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.306 [2024-07-25 05:24:45.402595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.306 [2024-07-25 05:24:45.402608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.306 [2024-07-25 05:24:45.402618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.306 [2024-07-25 05:24:45.402628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.306 [2024-07-25 05:24:45.402778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.306 [2024-07-25 05:24:45.402871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.306 [2024-07-25 05:24:45.403259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.306 [2024-07-25 05:24:45.403267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.563 [2024-07-25 05:24:45.497313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.563 Malloc0 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.563 05:24:45 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.563 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.564 [2024-07-25 05:24:45.586270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:41.564 [ 00:18:41.564 { 00:18:41.564 "allow_any_host": true, 00:18:41.564 "hosts": [], 00:18:41.564 "listen_addresses": [ 00:18:41.564 { 00:18:41.564 "adrfam": "IPv4", 00:18:41.564 "traddr": "10.0.0.2", 00:18:41.564 "trsvcid": "4420", 00:18:41.564 "trtype": "TCP" 00:18:41.564 } 00:18:41.564 ], 00:18:41.564 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:41.564 "subtype": "Discovery" 00:18:41.564 }, 00:18:41.564 { 00:18:41.564 "allow_any_host": true, 00:18:41.564 "hosts": [], 00:18:41.564 "listen_addresses": [ 00:18:41.564 { 00:18:41.564 "adrfam": "IPv4", 00:18:41.564 "traddr": "10.0.0.2", 00:18:41.564 "trsvcid": "4420", 00:18:41.564 "trtype": "TCP" 00:18:41.564 } 00:18:41.564 ], 00:18:41.564 "max_cntlid": 65519, 00:18:41.564 "max_namespaces": 32, 00:18:41.564 "min_cntlid": 1, 00:18:41.564 "model_number": "SPDK bdev Controller", 00:18:41.564 "namespaces": [ 00:18:41.564 { 00:18:41.564 "bdev_name": "Malloc0", 00:18:41.564 "eui64": "ABCDEF0123456789", 00:18:41.564 "name": "Malloc0", 00:18:41.564 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:41.564 "nsid": 1, 00:18:41.564 "uuid": "5cb49e90-4fa6-487c-8dc6-b95a957c6240" 00:18:41.564 } 00:18:41.564 ], 00:18:41.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.564 "serial_number": "SPDK00000000000001", 00:18:41.564 "subtype": "NVMe" 00:18:41.564 } 00:18:41.564 ] 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.564 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:41.564 [2024-07-25 05:24:45.638180] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:41.564 [2024-07-25 05:24:45.638238] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85962 ] 00:18:41.824 [2024-07-25 05:24:45.777482] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:41.824 [2024-07-25 05:24:45.777566] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:41.824 [2024-07-25 05:24:45.777574] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:41.824 [2024-07-25 05:24:45.777589] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:41.824 [2024-07-25 05:24:45.777600] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:41.824 [2024-07-25 05:24:45.777765] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:41.824 [2024-07-25 05:24:45.777818] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdcfa60 0 00:18:41.824 [2024-07-25 05:24:45.781989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:41.824 [2024-07-25 05:24:45.782017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:41.824 [2024-07-25 05:24:45.782024] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:41.824 [2024-07-25 05:24:45.782028] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:41.824 [2024-07-25 05:24:45.782081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.782090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.782095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.824 [2024-07-25 05:24:45.782113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:41.824 [2024-07-25 05:24:45.782149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.824 [2024-07-25 05:24:45.789992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.824 [2024-07-25 05:24:45.790024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.824 [2024-07-25 05:24:45.790030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.790037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.824 [2024-07-25 05:24:45.790054] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:41.824 [2024-07-25 05:24:45.790068] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:41.824 [2024-07-25 05:24:45.790077] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:41.824 [2024-07-25 05:24:45.790099] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.790105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.790109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.824 [2024-07-25 05:24:45.790124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.824 [2024-07-25 05:24:45.790163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.824 [2024-07-25 05:24:45.790301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.824 [2024-07-25 05:24:45.790309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.824 [2024-07-25 05:24:45.790313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.790318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.824 [2024-07-25 05:24:45.790324] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:41.824 [2024-07-25 05:24:45.790332] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:41.824 [2024-07-25 05:24:45.790341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.790346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.790349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.824 [2024-07-25 05:24:45.790358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.824 [2024-07-25 05:24:45.790379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.824 [2024-07-25 05:24:45.790447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.824 [2024-07-25 05:24:45.790454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.824 [2024-07-25 05:24:45.790459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.824 [2024-07-25 05:24:45.790463] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.824 [2024-07-25 05:24:45.790469] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:41.825 [2024-07-25 05:24:45.790479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:41.825 [2024-07-25 05:24:45.790487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.790503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.825 [2024-07-25 05:24:45.790522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 
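For readability, the target configuration issued through rpc_cmd in the trace above, plus the identify invocation whose debug output appears before and after this point, collected as plain commands. rpc_cmd is the test suite's wrapper around scripts/rpc.py (run here from the SPDK repo root); every command and argument below is copied verbatim from the log.

    # Transport, backing bdev, subsystem, namespace, and listeners, as traced.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # The discovery-subsystem identify under test:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all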
00:18:41.825 [2024-07-25 05:24:45.790586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.825 [2024-07-25 05:24:45.790593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.825 [2024-07-25 05:24:45.790598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.825 [2024-07-25 05:24:45.790608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:41.825 [2024-07-25 05:24:45.790619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.790636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.825 [2024-07-25 05:24:45.790654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.825 [2024-07-25 05:24:45.790718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.825 [2024-07-25 05:24:45.790725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.825 [2024-07-25 05:24:45.790729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.825 [2024-07-25 05:24:45.790739] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:41.825 [2024-07-25 05:24:45.790745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:41.825 [2024-07-25 05:24:45.790753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:41.825 [2024-07-25 05:24:45.790860] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:41.825 [2024-07-25 05:24:45.790865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:41.825 [2024-07-25 05:24:45.790886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.790894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.790902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.825 [2024-07-25 05:24:45.790921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.825 [2024-07-25 05:24:45.791022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.825 [2024-07-25 05:24:45.791032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:18:41.825 [2024-07-25 05:24:45.791036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.825 [2024-07-25 05:24:45.791046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:41.825 [2024-07-25 05:24:45.791057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.791074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.825 [2024-07-25 05:24:45.791095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.825 [2024-07-25 05:24:45.791171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.825 [2024-07-25 05:24:45.791178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.825 [2024-07-25 05:24:45.791182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.825 [2024-07-25 05:24:45.791192] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:41.825 [2024-07-25 05:24:45.791197] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:41.825 [2024-07-25 05:24:45.791206] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:41.825 [2024-07-25 05:24:45.791217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:41.825 [2024-07-25 05:24:45.791233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.791245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.825 [2024-07-25 05:24:45.791265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.825 [2024-07-25 05:24:45.791382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:41.825 [2024-07-25 05:24:45.791389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:41.825 [2024-07-25 05:24:45.791394] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791398] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcfa60): datao=0, datal=4096, cccid=0 00:18:41.825 [2024-07-25 05:24:45.791404] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe12840) on tqpair(0xdcfa60): expected_datao=0, 
payload_size=4096 00:18:41.825 [2024-07-25 05:24:45.791420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791429] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791435] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.825 [2024-07-25 05:24:45.791451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.825 [2024-07-25 05:24:45.791455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.825 [2024-07-25 05:24:45.791469] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:41.825 [2024-07-25 05:24:45.791475] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:41.825 [2024-07-25 05:24:45.791480] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:41.825 [2024-07-25 05:24:45.791490] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:41.825 [2024-07-25 05:24:45.791496] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:41.825 [2024-07-25 05:24:45.791501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:41.825 [2024-07-25 05:24:45.791511] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:41.825 [2024-07-25 05:24:45.791519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.791536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:41.825 [2024-07-25 05:24:45.791557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.825 [2024-07-25 05:24:45.791637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.825 [2024-07-25 05:24:45.791644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.825 [2024-07-25 05:24:45.791648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.825 [2024-07-25 05:24:45.791661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.791677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.825 [2024-07-25 05:24:45.791684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.791698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.825 [2024-07-25 05:24:45.791705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.791719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.825 [2024-07-25 05:24:45.791726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.825 [2024-07-25 05:24:45.791734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.825 [2024-07-25 05:24:45.791740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.826 [2024-07-25 05:24:45.791746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:18:41.826 [2024-07-25 05:24:45.791755] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:41.826 [2024-07-25 05:24:45.791762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.791766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcfa60) 00:18:41.826 [2024-07-25 05:24:45.791774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.826 [2024-07-25 05:24:45.791799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12840, cid 0, qid 0 00:18:41.826 [2024-07-25 05:24:45.791807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe129c0, cid 1, qid 0 00:18:41.826 [2024-07-25 05:24:45.791812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12b40, cid 2, qid 0 00:18:41.826 [2024-07-25 05:24:45.791817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.826 [2024-07-25 05:24:45.791823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12e40, cid 4, qid 0 00:18:41.826 [2024-07-25 05:24:45.791939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.826 [2024-07-25 05:24:45.791948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.826 [2024-07-25 05:24:45.791970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.791976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xe12e40) on tqpair=0xdcfa60 00:18:41.826 [2024-07-25 05:24:45.791982] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:41.826 [2024-07-25 05:24:45.791988] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:41.826 [2024-07-25 05:24:45.792002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcfa60) 00:18:41.826 [2024-07-25 05:24:45.792015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.826 [2024-07-25 05:24:45.792037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12e40, cid 4, qid 0 00:18:41.826 [2024-07-25 05:24:45.792163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:41.826 [2024-07-25 05:24:45.792171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:41.826 [2024-07-25 05:24:45.792176] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792180] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcfa60): datao=0, datal=4096, cccid=4 00:18:41.826 [2024-07-25 05:24:45.792185] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe12e40) on tqpair(0xdcfa60): expected_datao=0, payload_size=4096 00:18:41.826 [2024-07-25 05:24:45.792190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792198] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792202] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.826 [2024-07-25 05:24:45.792226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.826 [2024-07-25 05:24:45.792230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12e40) on tqpair=0xdcfa60 00:18:41.826 [2024-07-25 05:24:45.792249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:41.826 [2024-07-25 05:24:45.792280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcfa60) 00:18:41.826 [2024-07-25 05:24:45.792294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.826 [2024-07-25 05:24:45.792302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdcfa60) 00:18:41.826 [2024-07-25 05:24:45.792317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.826 [2024-07-25 
05:24:45.792344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12e40, cid 4, qid 0 00:18:41.826 [2024-07-25 05:24:45.792352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12fc0, cid 5, qid 0 00:18:41.826 [2024-07-25 05:24:45.792556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:41.826 [2024-07-25 05:24:45.792573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:41.826 [2024-07-25 05:24:45.792578] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792583] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcfa60): datao=0, datal=1024, cccid=4 00:18:41.826 [2024-07-25 05:24:45.792588] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe12e40) on tqpair(0xdcfa60): expected_datao=0, payload_size=1024 00:18:41.826 [2024-07-25 05:24:45.792593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792600] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792605] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.826 [2024-07-25 05:24:45.792618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.826 [2024-07-25 05:24:45.792622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.792626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12fc0) on tqpair=0xdcfa60 00:18:41.826 [2024-07-25 05:24:45.833069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.826 [2024-07-25 05:24:45.833115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.826 [2024-07-25 05:24:45.833122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12e40) on tqpair=0xdcfa60 00:18:41.826 [2024-07-25 05:24:45.833156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcfa60) 00:18:41.826 [2024-07-25 05:24:45.833177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.826 [2024-07-25 05:24:45.833218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12e40, cid 4, qid 0 00:18:41.826 [2024-07-25 05:24:45.833348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:41.826 [2024-07-25 05:24:45.833356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:41.826 [2024-07-25 05:24:45.833361] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833365] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcfa60): datao=0, datal=3072, cccid=4 00:18:41.826 [2024-07-25 05:24:45.833370] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe12e40) on tqpair(0xdcfa60): expected_datao=0, payload_size=3072 00:18:41.826 [2024-07-25 05:24:45.833376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833385] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:18:41.826 [2024-07-25 05:24:45.833390] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.826 [2024-07-25 05:24:45.833406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.826 [2024-07-25 05:24:45.833410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12e40) on tqpair=0xdcfa60 00:18:41.826 [2024-07-25 05:24:45.833427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcfa60) 00:18:41.826 [2024-07-25 05:24:45.833440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.826 [2024-07-25 05:24:45.833481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12e40, cid 4, qid 0 00:18:41.826 [2024-07-25 05:24:45.833555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:41.826 [2024-07-25 05:24:45.833563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:41.826 [2024-07-25 05:24:45.833567] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833571] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcfa60): datao=0, datal=8, cccid=4 00:18:41.826 [2024-07-25 05:24:45.833576] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe12e40) on tqpair(0xdcfa60): expected_datao=0, payload_size=8 00:18:41.826 [2024-07-25 05:24:45.833581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833589] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:41.826 [2024-07-25 05:24:45.833593] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:41.826 ===================================================== 00:18:41.826 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:41.826 ===================================================== 00:18:41.826 Controller Capabilities/Features 00:18:41.826 ================================ 00:18:41.826 Vendor ID: 0000 00:18:41.826 Subsystem Vendor ID: 0000 00:18:41.826 Serial Number: .................... 00:18:41.826 Model Number: ........................................ 
00:18:41.826 Firmware Version: 24.09 00:18:41.826 Recommended Arb Burst: 0 00:18:41.826 IEEE OUI Identifier: 00 00 00 00:18:41.826 Multi-path I/O 00:18:41.826 May have multiple subsystem ports: No 00:18:41.826 May have multiple controllers: No 00:18:41.826 Associated with SR-IOV VF: No 00:18:41.826 Max Data Transfer Size: 131072 00:18:41.826 Max Number of Namespaces: 0 00:18:41.826 Max Number of I/O Queues: 1024 00:18:41.826 NVMe Specification Version (VS): 1.3 00:18:41.826 NVMe Specification Version (Identify): 1.3 00:18:41.826 Maximum Queue Entries: 128 00:18:41.826 Contiguous Queues Required: Yes 00:18:41.826 Arbitration Mechanisms Supported 00:18:41.826 Weighted Round Robin: Not Supported 00:18:41.826 Vendor Specific: Not Supported 00:18:41.826 Reset Timeout: 15000 ms 00:18:41.826 Doorbell Stride: 4 bytes 00:18:41.827 NVM Subsystem Reset: Not Supported 00:18:41.827 Command Sets Supported 00:18:41.827 NVM Command Set: Supported 00:18:41.827 Boot Partition: Not Supported 00:18:41.827 Memory Page Size Minimum: 4096 bytes 00:18:41.827 Memory Page Size Maximum: 4096 bytes 00:18:41.827 Persistent Memory Region: Not Supported 00:18:41.827 Optional Asynchronous Events Supported 00:18:41.827 Namespace Attribute Notices: Not Supported 00:18:41.827 Firmware Activation Notices: Not Supported 00:18:41.827 ANA Change Notices: Not Supported 00:18:41.827 PLE Aggregate Log Change Notices: Not Supported 00:18:41.827 LBA Status Info Alert Notices: Not Supported 00:18:41.827 EGE Aggregate Log Change Notices: Not Supported 00:18:41.827 Normal NVM Subsystem Shutdown event: Not Supported 00:18:41.827 Zone Descriptor Change Notices: Not Supported 00:18:41.827 Discovery Log Change Notices: Supported 00:18:41.827 Controller Attributes 00:18:41.827 128-bit Host Identifier: Not Supported 00:18:41.827 Non-Operational Permissive Mode: Not Supported 00:18:41.827 NVM Sets: Not Supported 00:18:41.827 Read Recovery Levels: Not Supported 00:18:41.827 Endurance Groups: Not Supported 00:18:41.827 Predictable Latency Mode: Not Supported 00:18:41.827 Traffic Based Keep ALive: Not Supported 00:18:41.827 Namespace Granularity: Not Supported 00:18:41.827 SQ Associations: Not Supported 00:18:41.827 UUID List: Not Supported 00:18:41.827 Multi-Domain Subsystem: Not Supported 00:18:41.827 Fixed Capacity Management: Not Supported 00:18:41.827 Variable Capacity Management: Not Supported 00:18:41.827 Delete Endurance Group: Not Supported 00:18:41.827 Delete NVM Set: Not Supported 00:18:41.827 Extended LBA Formats Supported: Not Supported 00:18:41.827 Flexible Data Placement Supported: Not Supported 00:18:41.827 00:18:41.827 Controller Memory Buffer Support 00:18:41.827 ================================ 00:18:41.827 Supported: No 00:18:41.827 00:18:41.827 Persistent Memory Region Support 00:18:41.827 ================================ 00:18:41.827 Supported: No 00:18:41.827 00:18:41.827 Admin Command Set Attributes 00:18:41.827 ============================ 00:18:41.827 Security Send/Receive: Not Supported 00:18:41.827 Format NVM: Not Supported 00:18:41.827 Firmware Activate/Download: Not Supported 00:18:41.827 Namespace Management: Not Supported 00:18:41.827 Device Self-Test: Not Supported 00:18:41.827 Directives: Not Supported 00:18:41.827 NVMe-MI: Not Supported 00:18:41.827 Virtualization Management: Not Supported 00:18:41.827 Doorbell Buffer Config: Not Supported 00:18:41.827 Get LBA Status Capability: Not Supported 00:18:41.827 Command & Feature Lockdown Capability: Not Supported 00:18:41.827 Abort Command Limit: 1 00:18:41.827 Async 
Event Request Limit: 4 00:18:41.827 Number of Firmware Slots: N/A 00:18:41.827 Firmware Slot 1 Read-Only: N/A 00:18:41.827 [2024-07-25 05:24:45.878007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.827 [2024-07-25 05:24:45.878051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.827 [2024-07-25 05:24:45.878058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.827 [2024-07-25 05:24:45.878065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12e40) on tqpair=0xdcfa60 00:18:41.827 Firmware Activation Without Reset: N/A 00:18:41.827 Multiple Update Detection Support: N/A 00:18:41.827 Firmware Update Granularity: No Information Provided 00:18:41.827 Per-Namespace SMART Log: No 00:18:41.827 Asymmetric Namespace Access Log Page: Not Supported 00:18:41.827 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:41.827 Command Effects Log Page: Not Supported 00:18:41.827 Get Log Page Extended Data: Supported 00:18:41.827 Telemetry Log Pages: Not Supported 00:18:41.827 Persistent Event Log Pages: Not Supported 00:18:41.827 Supported Log Pages Log Page: May Support 00:18:41.827 Commands Supported & Effects Log Page: Not Supported 00:18:41.827 Feature Identifiers & Effects Log Page: May Support 00:18:41.827 NVMe-MI Commands & Effects Log Page: May Support 00:18:41.827 Data Area 4 for Telemetry Log: Not Supported 00:18:41.827 Error Log Page Entries Supported: 128 00:18:41.827 Keep Alive: Not Supported 00:18:41.827 00:18:41.827 NVM Command Set Attributes 00:18:41.827 ========================== 00:18:41.827 Submission Queue Entry Size 00:18:41.827 Max: 1 00:18:41.827 Min: 1 00:18:41.827 Completion Queue Entry Size 00:18:41.827 Max: 1 00:18:41.827 Min: 1 00:18:41.827 Number of Namespaces: 0 00:18:41.827 Compare Command: Not Supported 00:18:41.827 Write Uncorrectable Command: Not Supported 00:18:41.827 Dataset Management Command: Not Supported 00:18:41.827 Write Zeroes Command: Not Supported 00:18:41.827 Set Features Save Field: Not Supported 00:18:41.827 Reservations: Not Supported 00:18:41.827 Timestamp: Not Supported 00:18:41.827 Copy: Not Supported 00:18:41.827 Volatile Write Cache: Not Present 00:18:41.827 Atomic Write Unit (Normal): 1 00:18:41.827 Atomic Write Unit (PFail): 1 00:18:41.827 Atomic Compare & Write Unit: 1 00:18:41.827 Fused Compare & Write: Supported 00:18:41.827 Scatter-Gather List 00:18:41.827 SGL Command Set: Supported 00:18:41.827 SGL Keyed: Supported 00:18:41.827 SGL Bit Bucket Descriptor: Not Supported 00:18:41.827 SGL Metadata Pointer: Not Supported 00:18:41.827 Oversized SGL: Not Supported 00:18:41.827 SGL Metadata Address: Not Supported 00:18:41.827 SGL Offset: Supported 00:18:41.827 Transport SGL Data Block: Not Supported 00:18:41.827 Replay Protected Memory Block: Not Supported 00:18:41.827 00:18:41.827 Firmware Slot Information 00:18:41.827 ========================= 00:18:41.827 Active slot: 0 00:18:41.827 00:18:41.827 00:18:41.827 Error Log 00:18:41.827 ========= 00:18:41.827 00:18:41.827 Active Namespaces 00:18:41.827 ================= 00:18:41.827 Discovery Log Page 00:18:41.827 ================== 00:18:41.827 Generation Counter: 2 00:18:41.827 Number of Records: 2 00:18:41.827 Record Format: 0 00:18:41.827 00:18:41.827 Discovery Log Entry 0 00:18:41.827 ---------------------- 00:18:41.827 Transport Type: 3 (TCP) 00:18:41.827 Address Family: 1 (IPv4) 00:18:41.827 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:41.827 Entry Flags: 00:18:41.827 Duplicate Returned
Information: 1 00:18:41.827 Explicit Persistent Connection Support for Discovery: 1 00:18:41.827 Transport Requirements: 00:18:41.827 Secure Channel: Not Required 00:18:41.827 Port ID: 0 (0x0000) 00:18:41.827 Controller ID: 65535 (0xffff) 00:18:41.827 Admin Max SQ Size: 128 00:18:41.827 Transport Service Identifier: 4420 00:18:41.827 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:41.827 Transport Address: 10.0.0.2 00:18:41.827 Discovery Log Entry 1 00:18:41.827 ---------------------- 00:18:41.827 Transport Type: 3 (TCP) 00:18:41.827 Address Family: 1 (IPv4) 00:18:41.827 Subsystem Type: 2 (NVM Subsystem) 00:18:41.827 Entry Flags: 00:18:41.827 Duplicate Returned Information: 0 00:18:41.827 Explicit Persistent Connection Support for Discovery: 0 00:18:41.827 Transport Requirements: 00:18:41.827 Secure Channel: Not Required 00:18:41.827 Port ID: 0 (0x0000) 00:18:41.827 Controller ID: 65535 (0xffff) 00:18:41.827 Admin Max SQ Size: 128 00:18:41.827 Transport Service Identifier: 4420 00:18:41.827 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:41.827 Transport Address: 10.0.0.2 [2024-07-25 05:24:45.878200] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:41.827 [2024-07-25 05:24:45.878216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12840) on tqpair=0xdcfa60 00:18:41.827 [2024-07-25 05:24:45.878226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.827 [2024-07-25 05:24:45.878232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe129c0) on tqpair=0xdcfa60 00:18:41.827 [2024-07-25 05:24:45.878238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.827 [2024-07-25 05:24:45.878243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12b40) on tqpair=0xdcfa60 00:18:41.827 [2024-07-25 05:24:45.878248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.827 [2024-07-25 05:24:45.878254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.827 [2024-07-25 05:24:45.878259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.827 [2024-07-25 05:24:45.878272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.828 [2024-07-25 05:24:45.878295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.828 [2024-07-25 05:24:45.878324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.828 [2024-07-25 05:24:45.878402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.828 [2024-07-25 05:24:45.878410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.828 [2024-07-25 05:24:45.878414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878418] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.828 [2024-07-25 05:24:45.878432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.828 [2024-07-25 05:24:45.878449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.828 [2024-07-25 05:24:45.878475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.828 [2024-07-25 05:24:45.878558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.828 [2024-07-25 05:24:45.878565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.828 [2024-07-25 05:24:45.878570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.828 [2024-07-25 05:24:45.878580] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:41.828 [2024-07-25 05:24:45.878585] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:41.828 [2024-07-25 05:24:45.878596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.828 [2024-07-25 05:24:45.878612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.828 [2024-07-25 05:24:45.878632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.828 [2024-07-25 05:24:45.878688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.828 [2024-07-25 05:24:45.878695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.828 [2024-07-25 05:24:45.878700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.828 [2024-07-25 05:24:45.878716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.828 [2024-07-25 05:24:45.878734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.828 [2024-07-25 05:24:45.878752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.828 [2024-07-25 05:24:45.878808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.828 [2024-07-25 05:24:45.878816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.828 [2024-07-25 05:24:45.878820] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.828 [2024-07-25 05:24:45.878835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.828 [2024-07-25 05:24:45.878852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.828 [2024-07-25 05:24:45.878870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.828 [2024-07-25 05:24:45.878925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.828 [2024-07-25 05:24:45.878932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.828 [2024-07-25 05:24:45.878936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.828 [2024-07-25 05:24:45.878975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.878986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.828 [2024-07-25 05:24:45.878994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.828 [2024-07-25 05:24:45.879016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.828 [2024-07-25 05:24:45.879074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.828 [2024-07-25 05:24:45.879082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.828 [2024-07-25 05:24:45.879086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.879090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.828 [2024-07-25 05:24:45.879102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.879107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.879111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60) 00:18:41.828 [2024-07-25 05:24:45.879118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.828 [2024-07-25 05:24:45.879136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0 00:18:41.828 [2024-07-25 05:24:45.879190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:41.828 [2024-07-25 05:24:45.879197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:41.828 [2024-07-25 05:24:45.879201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:41.828 [2024-07-25 05:24:45.879205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60 00:18:41.828 
[2024-07-25 05:24:45.879216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:41.828 [2024-07-25 05:24:45.879221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:41.828 [2024-07-25 05:24:45.879225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60)
00:18:41.828 [2024-07-25 05:24:45.879233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.828 [2024-07-25 05:24:45.879251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0
00:18:41.828 [2024-07-25 05:24:45.879301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:41.828 [2024-07-25 05:24:45.879308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:41.828 [2024-07-25 05:24:45.879312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:41.828 [2024-07-25 05:24:45.879317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60
[... the nine-entry poll cycle above (FABRIC PROPERTY GET on cid 3, capsule response, request complete on tqpair 0xdcfa60) repeats verbatim, timestamps 05:24:45.879328 through 05:24:45.881834, while nvme_ctrlr_shutdown_poll_async waits for the discovery controller to finish shutting down ...]
00:18:41.830 [2024-07-25 05:24:45.881890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:41.830 [2024-07-25 05:24:45.881897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:41.830 [2024-07-25 05:24:45.881901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:41.830 [2024-07-25 05:24:45.881911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60
00:18:41.830 [2024-07-25 05:24:45.881922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:41.830 [2024-07-25 05:24:45.881927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:41.830 [2024-07-25 05:24:45.881931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60)
00:18:41.830 [2024-07-25 05:24:45.881939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.830 [2024-07-25 05:24:45.885964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0
00:18:41.830 [2024-07-25 05:24:45.885987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:41.830 [2024-07-25 05:24:45.885995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:41.830 [2024-07-25 05:24:45.885999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:41.830 [2024-07-25 05:24:45.886004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60
00:18:41.830 [2024-07-25 05:24:45.886019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:41.830 [2024-07-25 05:24:45.886024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:41.830 [2024-07-25 05:24:45.886029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcfa60)
00:18:41.830 [2024-07-25 05:24:45.886038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.830 [2024-07-25 05:24:45.886065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe12cc0, cid 3, qid 0
00:18:41.830 [2024-07-25 05:24:45.886126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:41.830 [2024-07-25 05:24:45.886133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:41.830 [2024-07-25 05:24:45.886137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:41.830 [2024-07-25 05:24:45.886142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe12cc0) on tqpair=0xdcfa60
00:18:41.830 [2024-07-25 05:24:45.886151] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:18:41.830 
00:18:41.830 05:24:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:18:41.830 [2024-07-25 05:24:45.923902] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
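For orientation before the trace: spdk_nvme_identify is a thin client around SPDK's host NVMe driver, and the -L all output that follows is the driver's own record of the standard NVMe-oF bring-up (FABRIC CONNECT, VS/CAP property reads, the CC.EN / CSTS.RDY enable handshake, IDENTIFY, then feature and log-page setup). A minimal client driving that same sequence through SPDK's public API could look like the sketch below. This is an illustrative reconstruction, not the tool's actual source: the application name is made up, error handling is trimmed, and the transport string is simply the one passed to -r above.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* hypothetical name, not from the log */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* The same target string the test passes to -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/*
	 * spdk_nvme_connect() performs the admin-queue bring-up recorded in the
	 * trace: FABRIC CONNECT, PROPERTY GET of VS and CAP, writing CC.EN and
	 * polling CSTS.RDY, then IDENTIFY plus AER/keep-alive/queue-count setup.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify data is cached on the controller once bring-up completes. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("serial number: %.20s\n", cdata->sn);

	/*
	 * Detaching shuts the controller down; that is the FABRIC PROPERTY GET
	 * polling loop and "shutdown complete" message seen earlier in the log.
	 */
	spdk_nvme_detach(ctrlr);
	return 0;
}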
00:18:41.831 [2024-07-25 05:24:45.923983] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85970 ]
00:18:42.089 [2024-07-25 05:24:46.061313] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:18:42.089 [2024-07-25 05:24:46.061397] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:18:42.089 [2024-07-25 05:24:46.061406] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:18:42.089 [2024-07-25 05:24:46.061420] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:18:42.089 [2024-07-25 05:24:46.061431] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:18:42.089 [2024-07-25 05:24:46.061582] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:18:42.090 [2024-07-25 05:24:46.061632] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfa5a60 0
00:18:42.090 [2024-07-25 05:24:46.065977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:18:42.090 [2024-07-25 05:24:46.066001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:18:42.090 [2024-07-25 05:24:46.066007] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:18:42.090 [2024-07-25 05:24:46.066012] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:18:42.090 [2024-07-25 05:24:46.066067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.066076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.066080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.066095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:18:42.090 [2024-07-25 05:24:46.066127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.073974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.090 [2024-07-25 05:24:46.073997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.090 [2024-07-25 05:24:46.074003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.090 [2024-07-25 05:24:46.074023] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:18:42.090 [2024-07-25 05:24:46.074033] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:18:42.090 [2024-07-25 05:24:46.074040] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:18:42.090 [2024-07-25 05:24:46.074058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.074078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.090 [2024-07-25 05:24:46.074107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.074181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.090 [2024-07-25 05:24:46.074188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.090 [2024-07-25 05:24:46.074192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.090 [2024-07-25 05:24:46.074203] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:18:42.090 [2024-07-25 05:24:46.074211] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:18:42.090 [2024-07-25 05:24:46.074220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.074238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.090 [2024-07-25 05:24:46.074259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.074314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.090 [2024-07-25 05:24:46.074321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.090 [2024-07-25 05:24:46.074325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.090 [2024-07-25 05:24:46.074336] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:18:42.090 [2024-07-25 05:24:46.074345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:18:42.090 [2024-07-25 05:24:46.074354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.074380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.090 [2024-07-25 05:24:46.074400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.074456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.090 [2024-07-25 05:24:46.074463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.090 [2024-07-25 05:24:46.074467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.090 [2024-07-25 05:24:46.074478] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:18:42.090 [2024-07-25 05:24:46.074489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.074507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.090 [2024-07-25 05:24:46.074526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.074578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.090 [2024-07-25 05:24:46.074585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.090 [2024-07-25 05:24:46.074589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.090 [2024-07-25 05:24:46.074599] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:18:42.090 [2024-07-25 05:24:46.074604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:18:42.090 [2024-07-25 05:24:46.074613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:18:42.090 [2024-07-25 05:24:46.074720] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:18:42.090 [2024-07-25 05:24:46.074733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:18:42.090 [2024-07-25 05:24:46.074744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.074761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.090 [2024-07-25 05:24:46.074782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.074839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.090 [2024-07-25 05:24:46.074847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.090 [2024-07-25 05:24:46.074851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.090 [2024-07-25 05:24:46.074861] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:18:42.090 [2024-07-25 05:24:46.074872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.074882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.074889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.090 [2024-07-25 05:24:46.074908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.074983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.090 [2024-07-25 05:24:46.074993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.090 [2024-07-25 05:24:46.074997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.075001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.090 [2024-07-25 05:24:46.075007] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:18:42.090 [2024-07-25 05:24:46.075012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:18:42.090 [2024-07-25 05:24:46.075021] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:18:42.090 [2024-07-25 05:24:46.075033] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:18:42.090 [2024-07-25 05:24:46.075044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.075049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.090 [2024-07-25 05:24:46.075057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.090 [2024-07-25 05:24:46.075080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.090 [2024-07-25 05:24:46.075176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:18:42.090 [2024-07-25 05:24:46.075189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:18:42.090 [2024-07-25 05:24:46.075194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.075198] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=4096, cccid=0
00:18:42.090 [2024-07-25 05:24:46.075204] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe8840) on tqpair(0xfa5a60): expected_datao=0, payload_size=4096
00:18:42.090 [2024-07-25 05:24:46.075209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.075218] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.075223] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:18:42.090 [2024-07-25 05:24:46.075232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.091 [2024-07-25 05:24:46.075239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.091 [2024-07-25 05:24:46.075243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.091 [2024-07-25 05:24:46.075257] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:18:42.091 [2024-07-25 05:24:46.075262] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:18:42.091 [2024-07-25 05:24:46.075267] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:18:42.091 [2024-07-25 05:24:46.075277] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:18:42.091 [2024-07-25 05:24:46.075282] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:18:42.091 [2024-07-25 05:24:46.075288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075298] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:42.091 [2024-07-25 05:24:46.075346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.091 [2024-07-25 05:24:46.075413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.091 [2024-07-25 05:24:46.075420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.091 [2024-07-25 05:24:46.075424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60
00:18:42.091 [2024-07-25 05:24:46.075438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.091 [2024-07-25 05:24:46.075461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.091 [2024-07-25 05:24:46.075486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.091 [2024-07-25 05:24:46.075507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.091 [2024-07-25 05:24:46.075527] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075536] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.091 [2024-07-25 05:24:46.075583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8840, cid 0, qid 0
00:18:42.091 [2024-07-25 05:24:46.075591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe89c0, cid 1, qid 0
00:18:42.091 [2024-07-25 05:24:46.075596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8b40, cid 2, qid 0
00:18:42.091 [2024-07-25 05:24:46.075601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0
00:18:42.091 [2024-07-25 05:24:46.075606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8e40, cid 4, qid 0
00:18:42.091 [2024-07-25 05:24:46.075700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.091 [2024-07-25 05:24:46.075712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.091 [2024-07-25 05:24:46.075717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8e40) on tqpair=0xfa5a60
00:18:42.091 [2024-07-25 05:24:46.075727] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:18:42.091 [2024-07-25 05:24:46.075733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:42.091 [2024-07-25 05:24:46.075795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8e40, cid 4, qid 0
00:18:42.091 [2024-07-25 05:24:46.075852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.091 [2024-07-25 05:24:46.075859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.091 [2024-07-25 05:24:46.075863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8e40) on tqpair=0xfa5a60
00:18:42.091 [2024-07-25 05:24:46.075937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.075981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.075986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.075994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.091 [2024-07-25 05:24:46.076016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8e40, cid 4, qid 0
00:18:42.091 [2024-07-25 05:24:46.076092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:18:42.091 [2024-07-25 05:24:46.076099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:18:42.091 [2024-07-25 05:24:46.076104] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076108] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=4096, cccid=4
00:18:42.091 [2024-07-25 05:24:46.076113] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe8e40) on tqpair(0xfa5a60): expected_datao=0, payload_size=4096
00:18:42.091 [2024-07-25 05:24:46.076118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076126] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076130] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.091 [2024-07-25 05:24:46.076146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.091 [2024-07-25 05:24:46.076149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8e40) on tqpair=0xfa5a60
00:18:42.091 [2024-07-25 05:24:46.076165] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:18:42.091 [2024-07-25 05:24:46.076178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.076189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:18:42.091 [2024-07-25 05:24:46.076197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa5a60)
00:18:42.091 [2024-07-25 05:24:46.076210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.091 [2024-07-25 05:24:46.076231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8e40, cid 4, qid 0
00:18:42.091 [2024-07-25 05:24:46.076312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:18:42.091 [2024-07-25 05:24:46.076319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:18:42.091 [2024-07-25 05:24:46.076323] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076327] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=4096, cccid=4
00:18:42.091 [2024-07-25 05:24:46.076332] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe8e40) on tqpair(0xfa5a60): expected_datao=0, payload_size=4096
00:18:42.091 [2024-07-25 05:24:46.076337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076345] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076349] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:18:42.091 [2024-07-25 05:24:46.076358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.092 [2024-07-25 05:24:46.076364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.092 [2024-07-25 05:24:46.076368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8e40) on tqpair=0xfa5a60
00:18:42.092 [2024-07-25 05:24:46.076389] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076401] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.076421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.076442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8e40, cid 4, qid 0
00:18:42.092 [2024-07-25 05:24:46.076507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:18:42.092 [2024-07-25 05:24:46.076514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:18:42.092 [2024-07-25 05:24:46.076518] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076522] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=4096, cccid=4
00:18:42.092 [2024-07-25 05:24:46.076527] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe8e40) on tqpair(0xfa5a60): expected_datao=0, payload_size=4096
00:18:42.092 [2024-07-25 05:24:46.076532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076540] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076544] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.092 [2024-07-25 05:24:46.076559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.092 [2024-07-25 05:24:46.076563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8e40) on tqpair=0xfa5a60
00:18:42.092 [2024-07-25 05:24:46.076577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076615] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076621] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:18:42.092 [2024-07-25 05:24:46.076627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:18:42.092 [2024-07-25 05:24:46.076632] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:18:42.092 [2024-07-25 05:24:46.076650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.076664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.076671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.076686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.092 [2024-07-25 05:24:46.076712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8e40, cid 4, qid 0
00:18:42.092 [2024-07-25 05:24:46.076721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8fc0, cid 5, qid 0
00:18:42.092 [2024-07-25 05:24:46.076796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.092 [2024-07-25 05:24:46.076803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.092 [2024-07-25 05:24:46.076807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8e40) on tqpair=0xfa5a60
00:18:42.092 [2024-07-25 05:24:46.076819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.092 [2024-07-25 05:24:46.076826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.092 [2024-07-25 05:24:46.076830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8fc0) on tqpair=0xfa5a60
00:18:42.092 [2024-07-25 05:24:46.076845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.076858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.076877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8fc0, cid 5, qid 0
00:18:42.092 [2024-07-25 05:24:46.076934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.092 [2024-07-25 05:24:46.076942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.092 [2024-07-25 05:24:46.076946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8fc0) on tqpair=0xfa5a60
00:18:42.092 [2024-07-25 05:24:46.076976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.076981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.076989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.077010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8fc0, cid 5, qid 0
00:18:42.092 [2024-07-25 05:24:46.077069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.092 [2024-07-25 05:24:46.077076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.092 [2024-07-25 05:24:46.077080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.077084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8fc0) on tqpair=0xfa5a60
00:18:42.092 [2024-07-25 05:24:46.077096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.077100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.077108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.077126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8fc0, cid 5, qid 0
00:18:42.092 [2024-07-25 05:24:46.077183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:42.092 [2024-07-25 05:24:46.077196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:42.092 [2024-07-25 05:24:46.077201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.077205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8fc0) on tqpair=0xfa5a60
00:18:42.092 [2024-07-25 05:24:46.077226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.077232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.077240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.077249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.077253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.077260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.077268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.077273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.077279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.077288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:42.092 [2024-07-25 05:24:46.077292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfa5a60)
00:18:42.092 [2024-07-25 05:24:46.077299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.092 [2024-07-25 05:24:46.077321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8fc0, cid 5, qid 0
00:18:42.092 [2024-07-25 05:24:46.077328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8e40, cid 4, qid 0
00:18:42.092 [2024-07-25 05:24:46.077334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe9140, cid 6, qid 0
00:18:42.092 [2024-07-25
05:24:46.077339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe92c0, cid 7, qid 0 00:18:42.092 [2024-07-25 05:24:46.077479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:42.092 [2024-07-25 05:24:46.077487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:42.092 [2024-07-25 05:24:46.077491] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:42.092 [2024-07-25 05:24:46.077495] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=8192, cccid=5 00:18:42.092 [2024-07-25 05:24:46.077500] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe8fc0) on tqpair(0xfa5a60): expected_datao=0, payload_size=8192 00:18:42.092 [2024-07-25 05:24:46.077505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.092 [2024-07-25 05:24:46.077523] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:42.092 [2024-07-25 05:24:46.077528] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:42.092 [2024-07-25 05:24:46.077534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:42.092 [2024-07-25 05:24:46.077540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:42.092 [2024-07-25 05:24:46.077544] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:42.092 [2024-07-25 05:24:46.077548] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=512, cccid=4 00:18:42.092 [2024-07-25 05:24:46.077553] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe8e40) on tqpair(0xfa5a60): expected_datao=0, payload_size=512 00:18:42.093 [2024-07-25 05:24:46.077558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077565] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077568] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:42.093 [2024-07-25 05:24:46.077581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:42.093 [2024-07-25 05:24:46.077585] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077589] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=512, cccid=6 00:18:42.093 [2024-07-25 05:24:46.077593] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe9140) on tqpair(0xfa5a60): expected_datao=0, payload_size=512 00:18:42.093 [2024-07-25 05:24:46.077598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077605] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077609] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:42.093 [2024-07-25 05:24:46.077621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:42.093 [2024-07-25 05:24:46.077625] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077629] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa5a60): datao=0, datal=4096, cccid=7 00:18:42.093 [2024-07-25 05:24:46.077634] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe92c0) on tqpair(0xfa5a60): expected_datao=0, payload_size=4096 00:18:42.093 [2024-07-25 05:24:46.077639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077646] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077650] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.093 [2024-07-25 05:24:46.077665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.093 [2024-07-25 05:24:46.077669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8fc0) on tqpair=0xfa5a60 00:18:42.093 [2024-07-25 05:24:46.077691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.093 [2024-07-25 05:24:46.077698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.093 [2024-07-25 05:24:46.077702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8e40) on tqpair=0xfa5a60 00:18:42.093 [2024-07-25 05:24:46.077719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.093 [2024-07-25 05:24:46.077726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.093 [2024-07-25 05:24:46.077730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe9140) on tqpair=0xfa5a60 00:18:42.093 [2024-07-25 05:24:46.077747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.093 [2024-07-25 05:24:46.077753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.093 ===================================================== 00:18:42.093 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.093 ===================================================== 00:18:42.093 Controller Capabilities/Features 00:18:42.093 ================================ 00:18:42.093 Vendor ID: 8086 00:18:42.093 Subsystem Vendor ID: 8086 00:18:42.093 Serial Number: SPDK00000000000001 00:18:42.093 Model Number: SPDK bdev Controller 00:18:42.093 Firmware Version: 24.09 00:18:42.093 Recommended Arb Burst: 6 00:18:42.093 IEEE OUI Identifier: e4 d2 5c 00:18:42.093 Multi-path I/O 00:18:42.093 May have multiple subsystem ports: Yes 00:18:42.093 May have multiple controllers: Yes 00:18:42.093 Associated with SR-IOV VF: No 00:18:42.093 Max Data Transfer Size: 131072 00:18:42.093 Max Number of Namespaces: 32 00:18:42.093 Max Number of I/O Queues: 127 00:18:42.093 NVMe Specification Version (VS): 1.3 00:18:42.093 NVMe Specification Version (Identify): 1.3 00:18:42.093 Maximum Queue Entries: 128 00:18:42.093 Contiguous Queues Required: Yes 00:18:42.093 Arbitration Mechanisms Supported 00:18:42.093 Weighted Round Robin: Not Supported 00:18:42.093 Vendor Specific: Not Supported 00:18:42.093 Reset Timeout: 15000 ms 00:18:42.093 Doorbell Stride: 4 bytes 00:18:42.093 NVM Subsystem Reset: Not Supported 00:18:42.093 Command Sets Supported 00:18:42.093 NVM Command Set: Supported 00:18:42.093 Boot Partition: Not Supported 00:18:42.093 Memory Page Size Minimum: 4096 bytes 
00:18:42.093 Memory Page Size Maximum: 4096 bytes 00:18:42.093 Persistent Memory Region: Not Supported 00:18:42.093 Optional Asynchronous Events Supported 00:18:42.093 Namespace Attribute Notices: Supported 00:18:42.093 Firmware Activation Notices: Not Supported 00:18:42.093 ANA Change Notices: Not Supported 00:18:42.093 PLE Aggregate Log Change Notices: Not Supported 00:18:42.093 LBA Status Info Alert Notices: Not Supported 00:18:42.093 EGE Aggregate Log Change Notices: Not Supported 00:18:42.093 Normal NVM Subsystem Shutdown event: Not Supported 00:18:42.093 Zone Descriptor Change Notices: Not Supported 00:18:42.093 Discovery Log Change Notices: Not Supported 00:18:42.093 Controller Attributes 00:18:42.093 128-bit Host Identifier: Supported 00:18:42.093 Non-Operational Permissive Mode: Not Supported 00:18:42.093 NVM Sets: Not Supported 00:18:42.093 Read Recovery Levels: Not Supported 00:18:42.093 Endurance Groups: Not Supported 00:18:42.093 Predictable Latency Mode: Not Supported 00:18:42.093 Traffic Based Keep ALive: Not Supported 00:18:42.093 Namespace Granularity: Not Supported 00:18:42.093 SQ Associations: Not Supported 00:18:42.093 UUID List: Not Supported 00:18:42.093 Multi-Domain Subsystem: Not Supported 00:18:42.093 Fixed Capacity Management: Not Supported 00:18:42.093 Variable Capacity Management: Not Supported 00:18:42.093 Delete Endurance Group: Not Supported 00:18:42.093 Delete NVM Set: Not Supported 00:18:42.093 Extended LBA Formats Supported: Not Supported 00:18:42.093 Flexible Data Placement Supported: Not Supported 00:18:42.093 00:18:42.093 Controller Memory Buffer Support 00:18:42.093 ================================ 00:18:42.093 Supported: No 00:18:42.093 00:18:42.093 Persistent Memory Region Support 00:18:42.093 ================================ 00:18:42.093 Supported: No 00:18:42.093 00:18:42.093 Admin Command Set Attributes 00:18:42.093 ============================ 00:18:42.093 Security Send/Receive: Not Supported 00:18:42.093 Format NVM: Not Supported 00:18:42.093 Firmware Activate/Download: Not Supported 00:18:42.093 Namespace Management: Not Supported 00:18:42.093 Device Self-Test: Not Supported 00:18:42.093 Directives: Not Supported 00:18:42.093 NVMe-MI: Not Supported 00:18:42.093 Virtualization Management: Not Supported 00:18:42.093 Doorbell Buffer Config: Not Supported 00:18:42.093 Get LBA Status Capability: Not Supported 00:18:42.093 Command & Feature Lockdown Capability: Not Supported 00:18:42.093 Abort Command Limit: 4 00:18:42.093 Async Event Request Limit: 4 00:18:42.093 Number of Firmware Slots: N/A 00:18:42.093 Firmware Slot 1 Read-Only: N/A 00:18:42.093 [2024-07-25 05:24:46.077757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.093 [2024-07-25 05:24:46.077761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe92c0) on tqpair=0xfa5a60 00:18:42.093 Firmware Activation Without Reset: N/A 00:18:42.093 Multiple Update Detection Support: N/A 00:18:42.093 Firmware Update Granularity: No Information Provided 00:18:42.093 Per-Namespace SMART Log: No 00:18:42.093 Asymmetric Namespace Access Log Page: Not Supported 00:18:42.093 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:42.093 Command Effects Log Page: Supported 00:18:42.093 Get Log Page Extended Data: Supported 00:18:42.093 Telemetry Log Pages: Not Supported 00:18:42.093 Persistent Event Log Pages: Not Supported 00:18:42.093 Supported Log Pages Log Page: May Support 00:18:42.093 Commands Supported & Effects Log Page: Not Supported 00:18:42.093 Feature Identifiers
& Effects Log Page:May Support 00:18:42.093 NVMe-MI Commands & Effects Log Page: May Support 00:18:42.093 Data Area 4 for Telemetry Log: Not Supported 00:18:42.093 Error Log Page Entries Supported: 128 00:18:42.093 Keep Alive: Supported 00:18:42.093 Keep Alive Granularity: 10000 ms 00:18:42.093 00:18:42.093 NVM Command Set Attributes 00:18:42.093 ========================== 00:18:42.093 Submission Queue Entry Size 00:18:42.093 Max: 64 00:18:42.093 Min: 64 00:18:42.093 Completion Queue Entry Size 00:18:42.093 Max: 16 00:18:42.093 Min: 16 00:18:42.093 Number of Namespaces: 32 00:18:42.093 Compare Command: Supported 00:18:42.093 Write Uncorrectable Command: Not Supported 00:18:42.093 Dataset Management Command: Supported 00:18:42.093 Write Zeroes Command: Supported 00:18:42.093 Set Features Save Field: Not Supported 00:18:42.093 Reservations: Supported 00:18:42.093 Timestamp: Not Supported 00:18:42.093 Copy: Supported 00:18:42.093 Volatile Write Cache: Present 00:18:42.093 Atomic Write Unit (Normal): 1 00:18:42.093 Atomic Write Unit (PFail): 1 00:18:42.093 Atomic Compare & Write Unit: 1 00:18:42.093 Fused Compare & Write: Supported 00:18:42.093 Scatter-Gather List 00:18:42.093 SGL Command Set: Supported 00:18:42.094 SGL Keyed: Supported 00:18:42.094 SGL Bit Bucket Descriptor: Not Supported 00:18:42.094 SGL Metadata Pointer: Not Supported 00:18:42.094 Oversized SGL: Not Supported 00:18:42.094 SGL Metadata Address: Not Supported 00:18:42.094 SGL Offset: Supported 00:18:42.094 Transport SGL Data Block: Not Supported 00:18:42.094 Replay Protected Memory Block: Not Supported 00:18:42.094 00:18:42.094 Firmware Slot Information 00:18:42.094 ========================= 00:18:42.094 Active slot: 1 00:18:42.094 Slot 1 Firmware Revision: 24.09 00:18:42.094 00:18:42.094 00:18:42.094 Commands Supported and Effects 00:18:42.094 ============================== 00:18:42.094 Admin Commands 00:18:42.094 -------------- 00:18:42.094 Get Log Page (02h): Supported 00:18:42.094 Identify (06h): Supported 00:18:42.094 Abort (08h): Supported 00:18:42.094 Set Features (09h): Supported 00:18:42.094 Get Features (0Ah): Supported 00:18:42.094 Asynchronous Event Request (0Ch): Supported 00:18:42.094 Keep Alive (18h): Supported 00:18:42.094 I/O Commands 00:18:42.094 ------------ 00:18:42.094 Flush (00h): Supported LBA-Change 00:18:42.094 Write (01h): Supported LBA-Change 00:18:42.094 Read (02h): Supported 00:18:42.094 Compare (05h): Supported 00:18:42.094 Write Zeroes (08h): Supported LBA-Change 00:18:42.094 Dataset Management (09h): Supported LBA-Change 00:18:42.094 Copy (19h): Supported LBA-Change 00:18:42.094 00:18:42.094 Error Log 00:18:42.094 ========= 00:18:42.094 00:18:42.094 Arbitration 00:18:42.094 =========== 00:18:42.094 Arbitration Burst: 1 00:18:42.094 00:18:42.094 Power Management 00:18:42.094 ================ 00:18:42.094 Number of Power States: 1 00:18:42.094 Current Power State: Power State #0 00:18:42.094 Power State #0: 00:18:42.094 Max Power: 0.00 W 00:18:42.094 Non-Operational State: Operational 00:18:42.094 Entry Latency: Not Reported 00:18:42.094 Exit Latency: Not Reported 00:18:42.094 Relative Read Throughput: 0 00:18:42.094 Relative Read Latency: 0 00:18:42.094 Relative Write Throughput: 0 00:18:42.094 Relative Write Latency: 0 00:18:42.094 Idle Power: Not Reported 00:18:42.094 Active Power: Not Reported 00:18:42.094 Non-Operational Permissive Mode: Not Supported 00:18:42.094 00:18:42.094 Health Information 00:18:42.094 ================== 00:18:42.094 Critical Warnings: 00:18:42.094 Available Spare 
Space: OK 00:18:42.094 Temperature: OK 00:18:42.094 Device Reliability: OK 00:18:42.094 Read Only: No 00:18:42.094 Volatile Memory Backup: OK 00:18:42.094 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:42.094 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:42.094 Available Spare: 0% 00:18:42.094 Available Spare Threshold: 0% 00:18:42.094 [2024-07-25 05:24:46.077871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.077878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfa5a60) 00:18:42.094 [2024-07-25 05:24:46.077886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.094 [2024-07-25 05:24:46.077910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe92c0, cid 7, qid 0 00:18:42.094 [2024-07-25 05:24:46.081970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.094 [2024-07-25 05:24:46.081991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.094 [2024-07-25 05:24:46.081996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe92c0) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082072] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:18:42.094 [2024-07-25 05:24:46.082091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8840) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.094 [2024-07-25 05:24:46.082106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe89c0) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.094 [2024-07-25 05:24:46.082117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8b40) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.094 [2024-07-25 05:24:46.082127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.094 [2024-07-25 05:24:46.082142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082147] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.094 [2024-07-25 05:24:46.082161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.094 [2024-07-25 05:24:46.082193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.094 [2024-07-25 05:24:46.082263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.094 [2024-07-25
05:24:46.082270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.094 [2024-07-25 05:24:46.082275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.094 [2024-07-25 05:24:46.082311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.094 [2024-07-25 05:24:46.082335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.094 [2024-07-25 05:24:46.082408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.094 [2024-07-25 05:24:46.082416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.094 [2024-07-25 05:24:46.082420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082430] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:42.094 [2024-07-25 05:24:46.082435] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:42.094 [2024-07-25 05:24:46.082446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.094 [2024-07-25 05:24:46.082463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.094 [2024-07-25 05:24:46.082482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.094 [2024-07-25 05:24:46.082536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.094 [2024-07-25 05:24:46.082543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.094 [2024-07-25 05:24:46.082547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.094 [2024-07-25 05:24:46.082563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.094 [2024-07-25 05:24:46.082573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.094 [2024-07-25 05:24:46.082580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.082599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.082652] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.082659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.082663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.082678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.082695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.082714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.082764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.082772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.082776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.082791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.082808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.082827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.082883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.082890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.082895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.082910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.082919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.082927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.082967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083038] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.083054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.083177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.083295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 
[2024-07-25 05:24:46.083428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.083542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.083655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.083773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 
05:24:46.083783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.083873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.083877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.083892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.083902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.095 [2024-07-25 05:24:46.083909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.095 [2024-07-25 05:24:46.083928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.095 [2024-07-25 05:24:46.083995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.095 [2024-07-25 05:24:46.084013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.095 [2024-07-25 05:24:46.084018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.084023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.095 [2024-07-25 05:24:46.084035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.084041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.095 [2024-07-25 05:24:46.084045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.084132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.084139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.084143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.084158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.084245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.084252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.084256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.084271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.084358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.084365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.084369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.084385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.084473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.084480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.084484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.084500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 
05:24:46.084589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.084596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.084600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.084616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.084705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.084717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.084721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.084738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.084826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.084837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.084841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.084858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.084867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.084875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.084894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.084946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.088966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 
05:24:46.088985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.088991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.089008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.089014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.089018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa5a60) 00:18:42.096 [2024-07-25 05:24:46.089027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.096 [2024-07-25 05:24:46.089056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe8cc0, cid 3, qid 0 00:18:42.096 [2024-07-25 05:24:46.089118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:42.096 [2024-07-25 05:24:46.089126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:42.096 [2024-07-25 05:24:46.089130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:42.096 [2024-07-25 05:24:46.089134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe8cc0) on tqpair=0xfa5a60 00:18:42.096 [2024-07-25 05:24:46.089143] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:18:42.096 Life Percentage Used: 0% 00:18:42.096 Data Units Read: 0 00:18:42.096 Data Units Written: 0 00:18:42.096 Host Read Commands: 0 00:18:42.096 Host Write Commands: 0 00:18:42.096 Controller Busy Time: 0 minutes 00:18:42.096 Power Cycles: 0 00:18:42.096 Power On Hours: 0 hours 00:18:42.096 Unsafe Shutdowns: 0 00:18:42.096 Unrecoverable Media Errors: 0 00:18:42.096 Lifetime Error Log Entries: 0 00:18:42.096 Warning Temperature Time: 0 minutes 00:18:42.096 Critical Temperature Time: 0 minutes 00:18:42.096 00:18:42.096 Number of Queues 00:18:42.096 ================ 00:18:42.096 Number of I/O Submission Queues: 127 00:18:42.096 Number of I/O Completion Queues: 127 00:18:42.096 00:18:42.096 Active Namespaces 00:18:42.096 ================= 00:18:42.096 Namespace ID:1 00:18:42.096 Error Recovery Timeout: Unlimited 00:18:42.096 Command Set Identifier: NVM (00h) 00:18:42.096 Deallocate: Supported 00:18:42.096 Deallocated/Unwritten Error: Not Supported 00:18:42.096 Deallocated Read Value: Unknown 00:18:42.096 Deallocate in Write Zeroes: Not Supported 00:18:42.096 Deallocated Guard Field: 0xFFFF 00:18:42.096 Flush: Supported 00:18:42.096 Reservation: Supported 00:18:42.096 Namespace Sharing Capabilities: Multiple Controllers 00:18:42.096 Size (in LBAs): 131072 (0GiB) 00:18:42.096 Capacity (in LBAs): 131072 (0GiB) 00:18:42.096 Utilization (in LBAs): 131072 (0GiB) 00:18:42.096 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:42.096 EUI64: ABCDEF0123456789 00:18:42.096 UUID: 5cb49e90-4fa6-487c-8dc6-b95a957c6240 00:18:42.096 Thin Provisioning: Not Supported 00:18:42.096 Per-NS Atomic Units: Yes 00:18:42.096 Atomic Boundary Size (Normal): 0 00:18:42.096 Atomic Boundary Size (PFail): 0 00:18:42.096 Atomic Boundary Offset: 0 00:18:42.096 Maximum Single Source Range Length: 65535 00:18:42.096 Maximum Copy Length: 65535 00:18:42.096 Maximum Source Range Count: 1 00:18:42.096 NGUID/EUI64 Never Reused: No 00:18:42.096 Namespace Write Protected: No 00:18:42.096 Number of LBA Formats: 1 00:18:42.096 Current LBA Format: LBA Format #00 00:18:42.096 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:42.097
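The report above is what host/identify.sh collected from the target before tearing it down. As a rough manual sketch (assuming the SPDK examples were built in this workspace and the listener from the log, 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1, is still up), the same report could be regenerated by hand:

    # Hypothetical manual re-run of the identify step seen in this log.
    cd /home/vagrant/spdk_repo/spdk
    # Query the NVMe-oF TCP controller directly with the SPDK identify example:
    ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # From the initiator side, nvme-cli can confirm the subsystem is advertised:
    nvme discover -t tcp -a 10.0.0.2 -s 4420

The zeroed Health Information and SMART counters are expected here: Model Number "SPDK bdev Controller" indicates a freshly created SPDK bdev backing the namespace, not real flash media.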
00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.097 rmmod nvme_tcp 00:18:42.097 rmmod nvme_fabrics 00:18:42.097 rmmod nvme_keyring 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 85923 ']' 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 85923 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 85923 ']' 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 85923 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85923 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:42.097 killing process with pid 85923 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85923' 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 85923 00:18:42.097 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 85923 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
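The xtrace above is identify.sh's cleanup path: the subsystem is removed over JSON-RPC, then nvmfcleanup unloads the kernel initiator modules (the bare rmmod lines are modprobe's verbose output). Condensed, and assuming rpc.py's default local socket, the equivalent by hand would look roughly like:

    # Delete the test subsystem, then unload the host-side NVMe/TCP modules.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sudo modprobe -v -r nvme-tcp      # drops nvme_tcp, nvme_fabrics, nvme_keyring per the log
    sudo modprobe -v -r nvme-fabrics  # effectively a no-op here, already removed above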
00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:42.355 ************************************ 00:18:42.355 END TEST nvmf_identify 00:18:42.355 ************************************ 00:18:42.355 00:18:42.355 real 0m1.794s 00:18:42.355 user 0m4.155s 00:18:42.355 sys 0m0.552s 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.355 ************************************ 00:18:42.355 START TEST nvmf_perf 00:18:42.355 ************************************ 00:18:42.355 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:42.635 * Looking for test storage... 00:18:42.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.635 05:24:46 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:42.635 Cannot find device "nvmf_tgt_br" 00:18:42.635 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.636 Cannot find device "nvmf_tgt_br2" 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:42.636 Cannot find device "nvmf_tgt_br" 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:42.636 Cannot find device "nvmf_tgt_br2" 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.636 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:42.894 05:24:46 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:42.894 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:42.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:42.895 00:18:42.895 --- 10.0.0.2 ping statistics --- 00:18:42.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.895 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:42.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:42.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:42.895 00:18:42.895 --- 10.0.0.3 ping statistics --- 00:18:42.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.895 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:42.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:42.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:42.895 00:18:42.895 --- 10.0.0.1 ping statistics --- 00:18:42.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.895 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86139 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86139 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 86139 ']' 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.895 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:42.895 [2024-07-25 05:24:47.042755] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:42.895 [2024-07-25 05:24:47.042865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.154 [2024-07-25 05:24:47.186302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.154 [2024-07-25 05:24:47.260577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
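For orientation: the nvmf_veth_init sequence traced above (interleaved here with the nvmf_tgt startup banner) builds one network namespace for the target, three veth pairs, and a bridge joining their host-side ends; the pings are the sanity check that 10.0.0.1 (initiator) can reach 10.0.0.2/10.0.0.3 (target). A minimal standalone sketch, using the same interface names and 10.0.0.0/24 addressing that nvmf/common.sh defines:

  # Target runs in its own netns; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target-side ends move into the netns
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br              # enslave all host-side ends to the bridge
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow intra-bridge forwarding
  ping -c 1 10.0.0.2                                   # connectivity checks, as in the trace
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1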
00:18:43.154 [2024-07-25 05:24:47.260664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.154 [2024-07-25 05:24:47.260679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.154 [2024-07-25 05:24:47.260688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.154 [2024-07-25 05:24:47.260697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.154 [2024-07-25 05:24:47.261043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.154 [2024-07-25 05:24:47.261107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.154 [2024-07-25 05:24:47.261237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.154 [2024-07-25 05:24:47.261229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:44.090 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:44.348 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:44.348 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:44.606 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:44.606 05:24:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.173 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:45.173 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:45.173 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:45.173 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:45.173 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:45.173 [2024-07-25 05:24:49.334219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.431 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.689 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:45.689 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.948 05:24:49 
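With the target app listening on /var/tmp/spdk.sock, perf.sh provisions it over RPC: a 64 MB malloc bdev plus the locally attached NVMe at 0000:00:10.0 (exposed as Nvme0n1 via gen_nvme.sh | load_subsystem_config) become two namespaces of one TCP subsystem. Condensed from the rpc.py calls traced here and the listener registrations that follow just below (a sketch; $rpc abbreviates the full scripts/rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                       # MALLOC_BDEV_SIZE=64 (MB), 512 B blocks -> Malloc0
  $rpc nvmf_create_transport -t tcp -o                 # options taken from NVMF_TRANSPORT_OPTS in the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # NSID 1 (512 B sectors)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # NSID 2 (4096 B sectors, per the later -o 36964 warnings)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420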
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:45.948 05:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:46.206 05:24:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.463 [2024-07-25 05:24:50.439550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.463 05:24:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:46.722 05:24:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:46.722 05:24:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:46.722 05:24:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:46.722 05:24:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:47.656 Initializing NVMe Controllers 00:18:47.656 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:47.656 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:47.656 Initialization complete. Launching workers. 00:18:47.656 ======================================================== 00:18:47.656 Latency(us) 00:18:47.656 Device Information : IOPS MiB/s Average min max 00:18:47.656 PCIE (0000:00:10.0) NSID 1 from core 0: 25664.00 100.25 1246.62 295.26 5150.37 00:18:47.656 ======================================================== 00:18:47.656 Total : 25664.00 100.25 1246.62 295.26 5150.37 00:18:47.656 00:18:47.914 05:24:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:49.289 Initializing NVMe Controllers 00:18:49.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:49.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:49.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:49.289 Initialization complete. Launching workers. 
00:18:49.289 ======================================================== 00:18:49.289 Latency(us) 00:18:49.289 Device Information : IOPS MiB/s Average min max 00:18:49.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3072.90 12.00 323.79 114.31 5124.54 00:18:49.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.45 7046.28 12048.81 00:18:49.289 ======================================================== 00:18:49.289 Total : 3196.89 12.49 626.51 114.31 12048.81 00:18:49.289 00:18:49.289 05:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:50.660 Initializing NVMe Controllers 00:18:50.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:50.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:50.660 Initialization complete. Launching workers. 00:18:50.660 ======================================================== 00:18:50.660 Latency(us) 00:18:50.660 Device Information : IOPS MiB/s Average min max 00:18:50.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8582.12 33.52 3728.82 760.89 9805.54 00:18:50.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2703.78 10.56 11962.28 5686.77 23892.52 00:18:50.660 ======================================================== 00:18:50.660 Total : 11285.90 44.09 5701.32 760.89 23892.52 00:18:50.660 00:18:50.660 05:24:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:50.660 05:24:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:53.189 Initializing NVMe Controllers 00:18:53.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:53.189 Controller IO queue size 128, less than required. 00:18:53.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:53.189 Controller IO queue size 128, less than required. 00:18:53.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:53.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:53.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:53.189 Initialization complete. Launching workers. 
00:18:53.189 ======================================================== 00:18:53.189 Latency(us) 00:18:53.189 Device Information : IOPS MiB/s Average min max 00:18:53.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 868.42 217.10 152887.63 69907.83 279587.89 00:18:53.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 386.96 96.74 353444.48 111676.54 745387.36 00:18:53.189 ======================================================== 00:18:53.189 Total : 1255.38 313.85 214708.02 69907.83 745387.36 00:18:53.189 00:18:53.189 05:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:53.447 Initializing NVMe Controllers 00:18:53.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:53.447 Controller IO queue size 128, less than required. 00:18:53.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:53.447 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:53.447 Controller IO queue size 128, less than required. 00:18:53.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:53.447 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:53.447 WARNING: Some requested NVMe devices were skipped 00:18:53.447 No valid NVMe controllers or AIO or URING devices found 00:18:53.447 05:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:55.977 Initializing NVMe Controllers 00:18:55.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:55.977 Controller IO queue size 128, less than required. 00:18:55.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:55.977 Controller IO queue size 128, less than required. 00:18:55.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:55.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:55.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:55.977 Initialization complete. Launching workers. 
00:18:55.977 00:18:55.977 ==================== 00:18:55.977 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:55.977 TCP transport: 00:18:55.977 polls: 7216 00:18:55.977 idle_polls: 4231 00:18:55.977 sock_completions: 2985 00:18:55.977 nvme_completions: 5855 00:18:55.977 submitted_requests: 8794 00:18:55.977 queued_requests: 1 00:18:55.977 00:18:55.977 ==================== 00:18:55.977 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:55.977 TCP transport: 00:18:55.977 polls: 9741 00:18:55.977 idle_polls: 6769 00:18:55.977 sock_completions: 2972 00:18:55.977 nvme_completions: 5931 00:18:55.977 submitted_requests: 8886 00:18:55.977 queued_requests: 1 00:18:55.977 ======================================================== 00:18:55.977 Latency(us) 00:18:55.977 Device Information : IOPS MiB/s Average min max 00:18:55.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1461.52 365.38 90460.54 51538.35 162811.73 00:18:55.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1480.50 370.12 89077.00 30722.75 163346.68 00:18:55.977 ======================================================== 00:18:55.977 Total : 2942.02 735.51 89764.31 30722.75 163346.68 00:18:55.977 00:18:55.977 05:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:55.977 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.235 rmmod nvme_tcp 00:18:56.235 rmmod nvme_fabrics 00:18:56.235 rmmod nvme_keyring 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86139 ']' 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86139 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 86139 ']' 00:18:56.235 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 86139 00:18:56.236 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:56.236 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.236 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86139 00:18:56.236 killing process with pid 86139 00:18:56.236 05:25:00 
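The spdk_nvme_perf runs above form a small sweep against the same transport ID, from a QD1 latency baseline up to QD128 large-block throughput, with the last run adding --transport-stat to produce the per-lcore TCP poll counters printed above. Annotated for reference (a sketch; the -q/-o/-w/-M/-t/-r glosses follow the perf usage text, while -HI, -O, -c and -P are reproduced exactly as traced without asserting their semantics):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  # -q queue depth, -o I/O size in bytes, -w pattern, -M read percentage, -t seconds, -r transport ID
  $PERF -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TRID"                        # QD1 baseline
  $PERF -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TRID"                    # QD32, -HI as traced
  $PERF -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TRID"               # 256 KiB I/Os
  $PERF -q 128 -o 36964  -O 4096  -w randrw -M 50 -t 5 -r "$TRID" -c 0xf -P 4   # odd I/O size: both namespaces skipped
  $PERF -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$TRID" --transport-stat       # adds TCP poll/completion stats

The fourth run is the negative case: 36964 is a multiple of neither 512 nor 4096, so both namespaces are removed from the test and perf reports no valid controllers, exactly as the warnings above show.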
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.236 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.236 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86139' 00:18:56.236 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 86139 00:18:56.236 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 86139 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.167 05:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:57.167 00:18:57.167 real 0m14.518s 00:18:57.167 user 0m53.988s 00:18:57.167 sys 0m3.314s 00:18:57.167 ************************************ 00:18:57.167 END TEST nvmf_perf 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:57.167 ************************************ 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.167 ************************************ 00:18:57.167 START TEST nvmf_fio_host 00:18:57.167 ************************************ 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:57.167 * Looking for test storage... 
00:18:57.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:57.167 05:25:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.167 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:57.168 Cannot find device "nvmf_tgt_br" 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:57.168 Cannot find device "nvmf_tgt_br2" 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:57.168 
Cannot find device "nvmf_tgt_br" 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:57.168 Cannot find device "nvmf_tgt_br2" 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:57.168 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.169 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:57.169 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:57.169 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:57.169 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:57.169 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:57.169 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:57.169 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:57.428 05:25:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:57.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:57.428 00:18:57.428 --- 10.0.0.2 ping statistics --- 00:18:57.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.428 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:57.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:57.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:57.428 00:18:57.428 --- 10.0.0.3 ping statistics --- 00:18:57.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.428 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:57.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:57.428 00:18:57.428 --- 10.0.0.1 ping statistics --- 00:18:57.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.428 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=86623 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 86623 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 86623 ']' 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.428 05:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.428 [2024-07-25 05:25:01.557506] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:18:57.428 [2024-07-25 05:25:01.557598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.686 [2024-07-25 05:25:01.708055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:57.686 [2024-07-25 05:25:01.797272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.686 [2024-07-25 05:25:01.797341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.686 [2024-07-25 05:25:01.797359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.686 [2024-07-25 05:25:01.797372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.686 [2024-07-25 05:25:01.797383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
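For reference, nvmfappstart as traced here amounts to launching the target inside the namespace and waiting for its RPC socket; the EAL banner and the four reactor notices above and below are that process coming up on cores 0-3. A condensed sketch (waitforlisten is the autotest helper that polls /var/tmp/spdk.sock):

  NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &   # shm id 0, tracepoint mask 0xFFFF, core mask 0xF
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPC connections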
00:18:57.686 [2024-07-25 05:25:01.797487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.686 [2024-07-25 05:25:01.797594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.686 [2024-07-25 05:25:01.798232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.686 [2024-07-25 05:25:01.798244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.619 05:25:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.619 05:25:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:58.619 05:25:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:58.876 [2024-07-25 05:25:03.034994] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.137 05:25:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:59.137 05:25:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:59.137 05:25:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.137 05:25:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:59.394 Malloc1 00:18:59.394 05:25:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.652 05:25:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:59.909 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.167 [2024-07-25 05:25:04.284681] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.167 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:00.425 05:25:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:00.683 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:00.683 fio-3.35 00:19:00.683 Starting 1 thread 00:19:03.210 00:19:03.211 test: (groupid=0, jobs=1): err= 0: pid=86755: Thu Jul 25 05:25:07 2024 00:19:03.211 read: IOPS=8951, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2007msec) 00:19:03.211 slat (usec): min=2, max=317, avg= 2.59, stdev= 3.13 00:19:03.211 clat (usec): min=3047, max=14153, avg=7455.12, stdev=585.37 00:19:03.211 lat (usec): min=3103, max=14155, avg=7457.71, stdev=585.17 00:19:03.211 clat percentiles (usec): 00:19:03.211 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:19:03.211 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:19:03.211 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8356], 00:19:03.211 | 99.00th=[ 9241], 99.50th=[ 9765], 99.90th=[11994], 99.95th=[13173], 00:19:03.211 | 99.99th=[14091] 00:19:03.211 bw ( KiB/s): min=35208, max=36144, per=100.00%, avg=35810.00, stdev=436.59, samples=4 00:19:03.211 iops : min= 8802, max= 9036, avg=8952.50, stdev=109.15, samples=4 00:19:03.211 write: IOPS=8974, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec); 0 zone resets 00:19:03.211 slat (usec): min=2, max=266, avg= 2.67, stdev= 2.30 00:19:03.211 clat (usec): min=2336, max=13042, avg=6757.66, stdev=509.57 00:19:03.211 lat (usec): min=2350, max=13044, avg=6760.34, stdev=509.45 00:19:03.211 clat percentiles (usec): 00:19:03.211 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6390], 00:19:03.211 | 30.00th=[ 
6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:19:03.211 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7504], 00:19:03.211 | 99.00th=[ 8291], 99.50th=[ 8717], 99.90th=[10814], 99.95th=[11731], 00:19:03.211 | 99.99th=[13042] 00:19:03.211 bw ( KiB/s): min=35520, max=36160, per=99.98%, avg=35890.00, stdev=268.56, samples=4 00:19:03.211 iops : min= 8880, max= 9040, avg=8972.50, stdev=67.14, samples=4 00:19:03.211 lat (msec) : 4=0.07%, 10=99.68%, 20=0.25% 00:19:03.211 cpu : usr=65.80%, sys=24.33%, ctx=10, majf=0, minf=7 00:19:03.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:03.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.211 issued rwts: total=17966,18011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.211 00:19:03.211 Run status group 0 (all jobs): 00:19:03.211 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2007-2007msec 00:19:03.211 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:03.211 05:25:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:03.211 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:03.211 fio-3.35 00:19:03.211 Starting 1 thread 00:19:05.739 00:19:05.739 test: (groupid=0, jobs=1): err= 0: pid=86804: Thu Jul 25 05:25:09 2024 00:19:05.739 read: IOPS=7081, BW=111MiB/s (116MB/s)(222MiB/2006msec) 00:19:05.739 slat (usec): min=3, max=159, avg= 4.87, stdev= 3.27 00:19:05.739 clat (usec): min=2481, max=25632, avg=10811.03, stdev=4260.45 00:19:05.739 lat (usec): min=2485, max=25642, avg=10815.91, stdev=4262.38 00:19:05.739 clat percentiles (usec): 00:19:05.739 | 1.00th=[ 5080], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7504], 00:19:05.739 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10552], 00:19:05.739 | 70.00th=[11469], 80.00th=[13566], 90.00th=[17695], 95.00th=[20317], 00:19:05.739 | 99.00th=[23462], 99.50th=[23987], 99.90th=[25035], 99.95th=[25035], 00:19:05.739 | 99.99th=[25560] 00:19:05.739 bw ( KiB/s): min=35136, max=73408, per=50.54%, avg=57264.00, stdev=16039.47, samples=4 00:19:05.739 iops : min= 2196, max= 4588, avg=3579.00, stdev=1002.47, samples=4 00:19:05.739 write: IOPS=4132, BW=64.6MiB/s (67.7MB/s)(117MiB/1817msec); 0 zone resets 00:19:05.739 slat (usec): min=37, max=381, avg=40.51, stdev= 6.96 00:19:05.739 clat (usec): min=3839, max=31731, avg=12735.41, stdev=3826.28 00:19:05.739 lat (usec): min=3878, max=31778, avg=12775.93, stdev=3829.26 00:19:05.739 clat percentiles (usec): 00:19:05.739 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:19:05.739 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:19:05.739 | 70.00th=[13042], 80.00th=[14746], 90.00th=[19268], 95.00th=[21890], 00:19:05.739 | 99.00th=[24511], 99.50th=[25297], 99.90th=[25822], 99.95th=[28705], 00:19:05.739 | 99.99th=[31851] 00:19:05.739 bw ( KiB/s): min=36736, max=76576, per=90.06%, avg=59552.00, stdev=16613.46, samples=4 00:19:05.739 iops : min= 2296, max= 4786, avg=3722.00, stdev=1038.34, samples=4 00:19:05.739 lat (msec) : 4=0.11%, 10=42.26%, 20=51.07%, 50=6.56% 00:19:05.739 cpu : usr=72.32%, sys=18.05%, ctx=7, majf=0, minf=16 00:19:05.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:05.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.739 issued rwts: total=14205,7509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.739 00:19:05.739 Run status group 0 (all jobs): 00:19:05.740 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=222MiB (233MB), run=2006-2006msec 00:19:05.740 WRITE: bw=64.6MiB/s 
(67.7MB/s), 64.6MiB/s-64.6MiB/s (67.7MB/s-67.7MB/s), io=117MiB (123MB), run=1817-1817msec 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.740 rmmod nvme_tcp 00:19:05.740 rmmod nvme_fabrics 00:19:05.740 rmmod nvme_keyring 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 86623 ']' 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 86623 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 86623 ']' 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 86623 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.740 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86623 00:19:06.018 killing process with pid 86623 00:19:06.018 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:06.018 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:06.018 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86623' 00:19:06.018 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 86623 00:19:06.018 05:25:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 86623 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.018 05:25:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:06.018 ************************************ 00:19:06.018 END TEST nvmf_fio_host 00:19:06.018 ************************************ 00:19:06.018 00:19:06.018 real 0m9.087s 00:19:06.018 user 0m37.728s 00:19:06.018 sys 0m2.206s 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.018 05:25:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.281 ************************************ 00:19:06.281 START TEST nvmf_failover 00:19:06.281 ************************************ 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:06.281 * Looking for test storage... 00:19:06.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.281 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triplet repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.281 [... paths/export.sh@3 and @4 prepend the same toolchain directories again, @5 runs export PATH, and @6 echoes the final value; the near-identical multi-kilobyte PATH expansions are collapsed here ...] 00:19:06.282
05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:06.282 Cannot find device "nvmf_tgt_br" 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.282 Cannot find device "nvmf_tgt_br2" 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:06.282 Cannot find device "nvmf_tgt_br" 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:06.282 Cannot find device "nvmf_tgt_br2" 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.282 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.540 05:25:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:06.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:06.540 00:19:06.540 --- 10.0.0.2 ping statistics --- 00:19:06.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.540 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:06.540 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.540 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:06.540 00:19:06.540 --- 10.0.0.3 ping statistics --- 00:19:06.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.540 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:06.540 00:19:06.540 --- 10.0.0.1 ping statistics --- 00:19:06.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.540 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87018 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87018 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87018 ']' 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.540 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 [2024-07-25 05:25:10.694287] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:19:06.540 [2024-07-25 05:25:10.694387] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.798 [2024-07-25 05:25:10.830915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.798 [2024-07-25 05:25:10.889925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
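Condensing the nvmf_veth_init sequence above: an initiator-side veth pair stays in the root namespace, two target-side pairs are moved into nvmf_tgt_ns_spdk, everything is joined by the nvmf_br bridge, and port 4420 is opened on the initiator interface. The same topology as a plain command list, taken from the trace (the teardown of pre-existing devices is omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # root namespace -> target address reachability check

The three pings in the trace confirm both target addresses and the initiator address answer across the bridge before the target is started inside the namespace.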
00:19:06.798 [2024-07-25 05:25:10.889991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.798 [2024-07-25 05:25:10.890004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.798 [2024-07-25 05:25:10.890012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.798 [2024-07-25 05:25:10.890019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.798 [2024-07-25 05:25:10.890120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.798 [2024-07-25 05:25:10.890685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.798 [2024-07-25 05:25:10.890720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.055 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.055 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:07.055 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.055 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.055 05:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:07.055 05:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.055 05:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:07.313 [2024-07-25 05:25:11.267172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.313 05:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:07.570 Malloc0 00:19:07.570 05:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:07.828 05:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:08.085 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.343 [2024-07-25 05:25:12.362901] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.343 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:08.600 [2024-07-25 05:25:12.675226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:08.600 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:08.859 [2024-07-25 05:25:12.919416] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87116 00:19:08.859 
05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87116 /var/tmp/bdevperf.sock 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87116 ']' 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.859 05:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.121 05:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.121 05:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:09.121 05:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:09.687 NVMe0n1 00:19:09.687 05:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:09.945 00:19:09.946 05:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87149 00:19:09.946 05:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.946 05:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:10.878 05:25:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.136 [2024-07-25 05:25:15.214415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62e50 is same with the state(5) to be set 00:19:11.136 [2024-07-25 05:25:15.214865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62e50 is same with the state(5) to be set 00:19:11.136 [2024-07-25 05:25:15.215001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62e50 is same with the state(5) to be set 00:19:11.136 [2024-07-25 05:25:15.215085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62e50 is same with the state(5) to be set 00:19:11.136 [2024-07-25 05:25:15.215149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62e50 is same with the 
state(5) to be set 00:19:11.136 [2024-07-25 05:25:15.215221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62e50 is same with the state(5) to be set 00:19:11.136 [... the same tcp.c:1653 *ERROR* line repeats for tqpair=0xb62e50 with timestamps 05:25:15.215273 through 05:25:15.220565; duplicate messages collapsed ...] 00:19:11.137 05:25:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:14.418 05:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:14.675 00:19:14.675 05:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:14.932 05:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:18.211 05:25:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.211 [2024-07-25 05:25:22.222150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.211 05:25:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
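Stripped of the xtrace noise, the failover choreography above (together with the final removal that follows) is just listener juggling against the live subsystem while bdevperf keeps its verify workload running. The same RPC sequence, condensed straight from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Drop the active path; I/O fails over from 4420 to 4421.
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # Offer a third path, then drop 4421 so I/O moves to 4422.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # Restore the original path, then drop 4422 to fail back to 4420.
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422

The bursts of tcp.c:1653 recv-state errors around each removal come from the target tearing down the dropped path's qpairs.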
00:19:19.145 05:25:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:19:19.404 [2024-07-25 05:25:23.556482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf30 is same with the state(5) to be set
00:19:19.405 [... the tcp.c:1653 recv-state *ERROR* line above repeated 43 more times for tqpair=0xd1cf30, 05:25:23.556540 through 05:25:23.557007 ...]
00:19:19.662 05:25:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87149
00:19:24.922 0
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87116
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87116 ']'
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87116
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87116
00:19:24.922 killing process with pid 87116
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87116'
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87116
00:19:24.922 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87116
00:19:25.186 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:25.186 [2024-07-25 05:25:12.984596] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:19:25.186 [2024-07-25 05:25:12.984725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87116 ]
00:19:25.186 [2024-07-25 05:25:13.115552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:25.186 [2024-07-25 05:25:13.175736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:25.186 Running I/O for 15 seconds...
00:19:25.186 [2024-07-25 05:25:15.221003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:25.186 [2024-07-25 05:25:15.221055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.186 [... further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ commands lba:83272-83552 and lba:83560-83736, then WRITE commands lba:83744-83880 and lba:83888-84272 (len:8 each, sqid:1), every command completed ABORTED - SQ DELETION (00/08) ...]
00:19:25.188 [2024-07-25 05:25:15.224936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e118a0 is same with the state(5) to be set
00:19:25.188 [2024-07-25 05:25:15.224962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:25.188 [2024-07-25 05:25:15.224975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:25.189 [2024-07-25 05:25:15.224987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84280 len:8 PRP1 0x0 PRP2 0x0
00:19:25.189 [2024-07-25 05:25:15.225000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.189 [2024-07-25 05:25:15.225050] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e118a0 was disconnected and freed. reset controller.
00:19:25.189 [2024-07-25 05:25:15.225075] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:19:25.189 [2024-07-25 05:25:15.225135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.189 [2024-07-25 05:25:15.225156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.189 [2024-07-25 05:25:15.225172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.189 [2024-07-25 05:25:15.225186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.189 [2024-07-25 05:25:15.225200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.189 [2024-07-25 05:25:15.225214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.189 [2024-07-25 05:25:15.225228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.189 [2024-07-25 05:25:15.225241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.189 [2024-07-25 05:25:15.225254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:25.189 [2024-07-25 05:25:15.229283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.189 [2024-07-25 05:25:15.229326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc0e30 (9): Bad file descriptor
00:19:25.189 [2024-07-25 05:25:15.262671] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:25.189 [2024-07-25 05:25:18.930132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:25.189 [2024-07-25 05:25:18.930558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every remaining in-flight command on qid:1 (WRITEs lba 70600-71344, READs lba 70328-70576); all are aborted with SQ DELETION (00/08) ...]
00:19:25.192 [2024-07-25 05:25:18.934490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e36a50 is same with the state(5) to be set
00:19:25.192 [2024-07-25 05:25:18.934507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:25.192 [2024-07-25 05:25:18.934523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:25.192 [2024-07-25 05:25:18.934535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70584 len:8 PRP1 0x0 PRP2 0x0
00:19:25.192 [2024-07-25 05:25:18.934550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.192 [2024-07-25 05:25:18.934598] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e36a50 was disconnected and freed. reset controller.
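Every aborted command in the run above carries the same completion status, "(00/08)": status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" — exactly what is expected when the host tears down the submission queue to fail over. The trailing dnr:0 ("do not retry" clear) is what lets the bdev layer requeue the I/O on the new path. A small shell sketch of how to read the pair (the variable names are illustrative):

  status="00/08"                 # as printed by spdk_nvme_print_completion
  sct=$(( 16#${status%/*} ))     # status code type: 0x0 = generic command status
  sc=$(( 16#${status#*/} ))      # status code: 0x08 = Command Aborted due to SQ Deletion
  printf 'sct=0x%02x sc=0x%02x\n' "$sct" "$sc"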
00:19:25.192 [2024-07-25 05:25:18.934616] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:19:25.192 [2024-07-25 05:25:18.934672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.192 [2024-07-25 05:25:18.934692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.192 [2024-07-25 05:25:18.934707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.192 [2024-07-25 05:25:18.934720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.192 [2024-07-25 05:25:18.934734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.192 [2024-07-25 05:25:18.934747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.192 [2024-07-25 05:25:18.934761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.192 [2024-07-25 05:25:18.934774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.192 [2024-07-25 05:25:18.934792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:25.192 [2024-07-25 05:25:18.934844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc0e30 (9): Bad file descriptor
00:19:25.192 [2024-07-25 05:25:18.938785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.192 [2024-07-25 05:25:18.970856] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
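After each successful reset the controller carries I/O on the new transport ID (here 10.0.0.2:4422). From the initiator, the surviving path can be inspected with existing bdev_nvme RPCs; a sketch, again assuming the controller was attached under the illustrative name Nvme0 (bdev_nvme_get_io_paths is available in recent SPDK releases):

  scripts/rpc.py bdev_nvme_get_controllers -n Nvme0   # current trid of the attached controller
  scripts/rpc.py bdev_nvme_get_io_paths               # per-channel path state for multipath bdevs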
00:19:25.192 [2024-07-25 05:25:23.554199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.192 [2024-07-25 05:25:23.554270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for the admin ASYNC EVENT REQUESTs cid:1, cid:2 and cid:3, all aborted with SQ DELETION (00/08) ...]
00:19:25.192 [2024-07-25 05:25:23.554383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc0e30 is same with the state(5) to be set
00:19:25.193 [2024-07-25 05:25:23.557169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:25.193 [2024-07-25 05:25:23.557224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for the remaining in-flight READ/WRITE commands on qid:1 (lba 120616 and up), all aborted with SQ DELETION (00/08); the abort run continues past this excerpt ...]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.558941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.558969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.558995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.194 [2024-07-25 05:25:23.559286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.194 [2024-07-25 05:25:23.559299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.195 [2024-07-25 05:25:23.559327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.195 [2024-07-25 05:25:23.559356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.195 [2024-07-25 05:25:23.559385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.195 [2024-07-25 05:25:23.559413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.195 [2024-07-25 05:25:23.559442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.195 [2024-07-25 05:25:23.559471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.195 [2024-07-25 05:25:23.559499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.195 [2024-07-25 05:25:23.559528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.195 [2024-07-25 05:25:23.559564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.195 [2024-07-25 05:25:23.559592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.195 [2024-07-25 05:25:23.559607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 05:25:23.559621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.559928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.559978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121280 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.196 [2024-07-25 05:25:23.560834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.560967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.560993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.196 [2024-07-25 05:25:23.561041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 05:25:23.561085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 05:25:23.561132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 05:25:23.561196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 05:25:23.561243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 05:25:23.561287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 
05:25:23.561332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.196 [2024-07-25 05:25:23.561357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.196 [2024-07-25 05:25:23.561378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.197 [2024-07-25 05:25:23.561799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.197 [2024-07-25 05:25:23.561823] nvme_qpair.c: 
00:19:25.197 [2024-07-25 05:25:23.561870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:25.197 [2024-07-25 05:25:23.561891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:25.197 [2024-07-25 05:25:23.561909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121624 len:8 PRP1 0x0 PRP2 0x0
00:19:25.197 [2024-07-25 05:25:23.561934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.197 [2024-07-25 05:25:23.562019] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e4e080 was disconnected and freed. reset controller.
00:19:25.197 [2024-07-25 05:25:23.562051] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:19:25.197 [2024-07-25 05:25:23.562078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:25.197 [2024-07-25 05:25:23.566494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.197 [2024-07-25 05:25:23.566570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc0e30 (9): Bad file descriptor
00:19:25.197 [2024-07-25 05:25:23.602506] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:25.197
00:19:25.197 Latency(us)
00:19:25.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:25.197 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:25.197 Verification LBA range: start 0x0 length 0x4000
00:19:25.197 NVMe0n1 : 15.01 8377.05 32.72 193.82 0.00 14901.32 618.12 52905.43
00:19:25.197 ===================================================================================================================
00:19:25.197 Total : 8377.05 32.72 193.82 0.00 14901.32 618.12 52905.43
00:19:25.197 Received shutdown signal, test time was about 15.000000 seconds
00:19:25.197
00:19:25.197 Latency(us)
00:19:25.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:25.197 ===================================================================================================================
00:19:25.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87348
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87348 /var/tmp/bdevperf.sock
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87348 ']'
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:25.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:25.197 05:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:26.572 05:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:26.572 05:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:19:26.572 05:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:19:26.572 [2024-07-25 05:25:30.697791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:19:26.572 05:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:19:26.830 [2024-07-25 05:25:30.942074] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:19:26.830 05:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:27.394 NVMe0n1
00:19:27.394 05:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:27.652
00:19:27.652 05:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:27.910
00:19:28.167 05:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:28.167 05:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:19:28.425 05:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:28.683 05:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:19:31.962 05:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:31.962 05:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:19:31.963 05:25:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=87497
00:19:31.963 05:25:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:31.963 05:25:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 87497
00:19:33.338 0
00:19:33.338 05:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-07-25 05:25:29.312258] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
[2024-07-25 05:25:29.312421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87348 ]
[2024-07-25 05:25:29.448738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 05:25:29.518646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-25 05:25:32.671819] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-07-25 05:25:32.671969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-25 05:25:32.671997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-25 05:25:32.672015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-25 05:25:32.672029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-25 05:25:32.672044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-25 05:25:32.672057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-25 05:25:32.672072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-25 05:25:32.672085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-25 05:25:32.672100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-25 05:25:32.672140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-25 05:25:32.672169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2237e30 (9): Bad file descriptor
[2024-07-25 05:25:32.676268] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
Running I/O for 1 seconds...
00:19:33.338
00:19:33.338 Latency(us)
00:19:33.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:33.338 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:33.338 Verification LBA range: start 0x0 length 0x4000
00:19:33.338 NVMe0n1 : 1.01 8848.68 34.57 0.00 0.00 14392.57 2308.65 15847.80
00:19:33.338 ===================================================================================================================
00:19:33.338 Total : 8848.68 34.57 0.00 0.00 14392.57 2308.65 15847.80
00:19:33.338 05:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:19:33.338 05:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:33.338 05:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:33.596 05:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:33.596 05:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:19:34.162 05:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:34.420 05:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:19:37.711 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:37.711 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:19:37.711 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87348
00:19:37.711 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87348 ']'
00:19:37.711 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87348
00:19:37.711 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87348
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:37.712 killing process with pid 87348
05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87348'
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87348
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87348
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:19:37.712 05:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:38.279 rmmod nvme_tcp
00:19:38.279 rmmod nvme_fabrics
00:19:38.279 rmmod nvme_keyring
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87018 ']'
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87018
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87018 ']'
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87018
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87018
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
killing process with pid 87018
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87018'
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87018
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87018
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:38.279 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:19:38.538
00:19:38.538 real 0m32.284s
00:19:38.538 user 2m7.184s
00:19:38.538 sys 0m4.430s
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:38.538 ************************************
00:19:38.538 END TEST nvmf_failover
00:19:38.538 ************************************
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:38.538 ************************************
00:19:38.538 START TEST nvmf_host_discovery
00:19:38.538 ************************************
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:19:38.538 * Looking for test storage...
00:19:38.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.538 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:19:38.539 Cannot find device "nvmf_tgt_br"
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:19:38.539 Cannot find device "nvmf_tgt_br2"
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:19:38.539 Cannot find device "nvmf_tgt_br"
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:19:38.539 Cannot find device "nvmf_tgt_br2"
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true
00:19:38.539 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:38.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:38.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
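The "Cannot find device" and "Cannot open network namespace" lines above are expected on a fresh runner, not failures: nvmf_veth_init first tears down whatever a previous run may have left behind, and each teardown command is immediately followed by the traced true, so a missing device or namespace cannot abort the script. The pattern amounts to this (a sketch of the same idempotent cleanup; the suite leaves the error output visible rather than silencing it):

  # idempotent teardown: a missing link/namespace is fine, keep going
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true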
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:19:38.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:38.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms
00:19:38.797
00:19:38.797 --- 10.0.0.2 ping statistics ---
00:19:38.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:38.797 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:19:38.797 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:19:38.797 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms
00:19:38.797
00:19:38.797 --- 10.0.0.3 ping statistics ---
00:19:38.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:38.797 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:19:38.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:38.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:19:38.797
00:19:38.797 --- 10.0.0.1 ping statistics ---
00:19:38.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:38.797 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:38.797 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:39.055 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:19:39.055 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:39.055 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:39.055 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:39.055 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=87803
00:19:39.055 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:39.055 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 87803
00:19:39.056 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 87803 ']'
00:19:39.056 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:39.056 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:39.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:39.056 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
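Taken together, the ip(8)/iptables(8) calls above build the test's three-legged bridge topology: the initiator-side veth end stays in the root namespace as nvmf_init_if (10.0.0.1), both target-side veth ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and the peer ends are enslaved to the nvmf_br bridge; the three pings then prove every leg is reachable before any NVMe traffic flows. Condensed into a standalone script it is roughly this (a sketch reusing the exact names and addresses traced above; the suite's version lives in nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per leg; the *_br ends stay behind for the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # both target legs answer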
00:19:39.056 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:39.056 05:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:39.056 [2024-07-25 05:25:43.037571] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:19:39.056 [2024-07-25 05:25:43.037657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:39.056 [2024-07-25 05:25:43.173662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:39.315 [2024-07-25 05:25:43.249232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:39.315 [2024-07-25 05:25:43.249310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:39.315 [2024-07-25 05:25:43.249330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:39.315 [2024-07-25 05:25:43.249341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:39.315 [2024-07-25 05:25:43.249350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:39.315 [2024-07-25 05:25:43.249382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:39.886 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:39.886 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:19:39.886 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:39.886 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:39.886 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:40.146 [2024-07-25 05:25:44.099888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:40.146 [2024-07-25 05:25:44.108017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:40.146 null0
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:40.146 null1
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=87852
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 87852 /tmp/host.sock
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 87852 ']'
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:40.146 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:40.146 05:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:40.146 [2024-07-25 05:25:44.192868] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
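Everything from here on is driven over JSON-RPC; in this suite, rpc_cmd is essentially a wrapper around scripts/rpc.py that retries until the application's RPC socket is up, with -s selecting the socket (the target defaults to /var/tmp/spdk.sock). The target-side provisioning just traced, issued by hand, would look roughly like this (a sketch; $SPDK_DIR is a stand-in for /home/vagrant/spdk_repo/spdk):

  RPC="$SPDK_DIR/scripts/rpc.py"                # talks to /var/tmp/spdk.sock by default
  $RPC nvmf_create_transport -t tcp -o -u 8192  # TCP transport with the options nvmf/common.sh picks
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                # expose the discovery service itself
  $RPC bdev_null_create null0 1000 512          # 1000 MB null bdev, 512-byte blocks
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine

The second nvmf_tgt instance started above (-m 0x1 -r /tmp/host.sock) plays the host/initiator role; the test steers it through the same mechanism with -s /tmp/host.sock, beginning with the bdev_nvme_start_discovery call traced next.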
00:19:40.146 [2024-07-25 05:25:44.192984] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87852 ] 00:19:40.405 [2024-07-25 05:25:44.346352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.405 [2024-07-25 05:25:44.425559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.340 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.341 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:41.599 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.600 [2024-07-25 05:25:45.584367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:41.600 05:25:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.600 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.858 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:41.858 05:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:42.116 [2024-07-25 05:25:46.232982] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:42.116 [2024-07-25 05:25:46.233018] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:42.116 
[2024-07-25 05:25:46.233037] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:42.374 [2024-07-25 05:25:46.319140] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:42.374 [2024-07-25 05:25:46.376025] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:42.374 [2024-07-25 05:25:46.376060] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:42.973 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.973 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:42.973 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:42.973 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.973 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.973 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.973 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.974 05:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:42.974 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.245 [2024-07-25 05:25:47.169128] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:43.245 [2024-07-25 05:25:47.170155] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:43.245 [2024-07-25 05:25:47.170198] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.245 [2024-07-25 05:25:47.256219] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.245 05:25:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.245 [2024-07-25 05:25:47.316526] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:43.245 [2024-07-25 05:25:47.316559] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:43.245 [2024-07-25 05:25:47.316567] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:43.245 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:44.180 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:44.440 05:25:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.440 [2024-07-25 05:25:48.442381] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:44.440 [2024-07-25 05:25:48.442420] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:44.440 [2024-07-25 05:25:48.443692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.440 [2024-07-25 05:25:48.443729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.440 [2024-07-25 05:25:48.443745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.440 [2024-07-25 05:25:48.443755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.440 [2024-07-25 05:25:48.443766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:44.440 [2024-07-25 05:25:48.443776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.440 [2024-07-25 05:25:48.443787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.440 [2024-07-25 05:25:48.443797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.440 [2024-07-25 05:25:48.443807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:44.440 [2024-07-25 05:25:48.453650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.440 [2024-07-25 05:25:48.463683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:44.440 [2024-07-25 05:25:48.463852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.440 [2024-07-25 05:25:48.463879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8ac50 with addr=10.0.0.2, port=4420 00:19:44.440 [2024-07-25 05:25:48.463894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.440 [2024-07-25 05:25:48.463915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.440 [2024-07-25 05:25:48.463933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:44.440 [2024-07-25 05:25:48.463943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:44.440 [2024-07-25 05:25:48.463974] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:44.440 [2024-07-25 05:25:48.463995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.440 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.440 [2024-07-25 05:25:48.473778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:44.440 [2024-07-25 05:25:48.473951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.440 [2024-07-25 05:25:48.473995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8ac50 with addr=10.0.0.2, port=4420 00:19:44.440 [2024-07-25 05:25:48.474009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.440 [2024-07-25 05:25:48.474031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.440 [2024-07-25 05:25:48.474048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:44.440 [2024-07-25 05:25:48.474059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:44.440 [2024-07-25 05:25:48.474070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:44.440 [2024-07-25 05:25:48.474089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.440 [2024-07-25 05:25:48.483868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:44.440 [2024-07-25 05:25:48.484046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.440 [2024-07-25 05:25:48.484075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8ac50 with addr=10.0.0.2, port=4420 00:19:44.440 [2024-07-25 05:25:48.484089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.440 [2024-07-25 05:25:48.484111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.440 [2024-07-25 05:25:48.484129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:44.440 [2024-07-25 05:25:48.484139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:44.440 [2024-07-25 05:25:48.484150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:44.440 [2024-07-25 05:25:48.484167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
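
The repeated "connect() failed, errno = 111" (ECONNREFUSED) records here are expected rather than a fault: the listener on 10.0.0.2:4420 was just removed at host/discovery.sh@127, so the host's reconnect poller keeps failing against the dead port until a fresh discovery log page steers the path to 4421. The traced step that triggers the churn, as shown above (rpc_cmd is the test suite's own RPC wrapper):

    # Drop the first listener on the target subsystem; the host-side
    # discovery service is expected to converge on 10.0.0.2:4421 alone.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
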
00:19:44.440 [2024-07-25 05:25:48.493965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:44.440 [2024-07-25 05:25:48.494073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.440 [2024-07-25 05:25:48.494097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8ac50 with addr=10.0.0.2, port=4420 00:19:44.440 [2024-07-25 05:25:48.494109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.440 [2024-07-25 05:25:48.494129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.441 [2024-07-25 05:25:48.494145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:44.441 [2024-07-25 05:25:48.494155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:44.441 [2024-07-25 05:25:48.494167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:44.441 [2024-07-25 05:25:48.494183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:44.441 [2024-07-25 05:25:48.504036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:44.441 [2024-07-25 05:25:48.504169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.441 [2024-07-25 05:25:48.504193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8ac50 with addr=10.0.0.2, port=4420 00:19:44.441 [2024-07-25 05:25:48.504206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.441 [2024-07-25 05:25:48.504225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.441 [2024-07-25 05:25:48.504241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:44.441 [2024-07-25 05:25:48.504251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:44.441 [2024-07-25 05:25:48.504263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:44.441 [2024-07-25 05:25:48.504280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.441 [2024-07-25 05:25:48.514103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:44.441 [2024-07-25 05:25:48.514203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.441 [2024-07-25 05:25:48.514226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8ac50 with addr=10.0.0.2, port=4420 00:19:44.441 [2024-07-25 05:25:48.514238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.441 [2024-07-25 05:25:48.514256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.441 [2024-07-25 05:25:48.514272] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:44.441 [2024-07-25 05:25:48.514282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:44.441 [2024-07-25 05:25:48.514293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:44.441 [2024-07-25 05:25:48.514309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.441 [2024-07-25 05:25:48.524163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:44.441 [2024-07-25 05:25:48.524283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.441 [2024-07-25 05:25:48.524307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8ac50 with addr=10.0.0.2, port=4420 00:19:44.441 [2024-07-25 05:25:48.524319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ac50 is same with the state(5) to be set 00:19:44.441 [2024-07-25 05:25:48.524336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ac50 (9): Bad file descriptor 00:19:44.441 [2024-07-25 05:25:48.524352] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:44.441 [2024-07-25 05:25:48.524362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:44.441 [2024-07-25 05:25:48.524372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:44.441 [2024-07-25 05:25:48.524388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
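
The two query helpers polled throughout this block can likewise be read straight off the trace (host/discovery.sh@55 and @63); reconstructed here as a sketch under that reading, with the /tmp/host.sock socket and the jq filters exactly as traced:

    get_bdev_list() {
        # discovery.sh@55: names of all bdevs on the host app, space-joined
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # discovery.sh@63: trsvcid of every path of controller $1,
        # numerically sorted so port lists compare deterministically
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

This is what lets the check at discovery.sh@131 below pass once only the second path remains: [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]].
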
00:19:44.441 [2024-07-25 05:25:48.529003] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:44.441 [2024-07-25 05:25:48.529032] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:44.441 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:44.699 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:44.700 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.958 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:44.958 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:44.958 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:44.958 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.958 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:44.958 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.958 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.892 [2024-07-25 05:25:49.893433] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:45.892 [2024-07-25 05:25:49.893462] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:45.892 [2024-07-25 05:25:49.893482] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:45.892 [2024-07-25 05:25:49.979566] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:45.892 [2024-07-25 05:25:50.039982] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:45.892 [2024-07-25 05:25:50.040040] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:45.892 2024/07/25 05:25:50 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:19:45.892 request: 00:19:45.892 { 00:19:45.892 "method": "bdev_nvme_start_discovery", 00:19:45.892 "params": { 00:19:45.892 "name": "nvme", 00:19:45.892 "trtype": "tcp", 00:19:45.892 "traddr": "10.0.0.2", 00:19:45.892 "adrfam": "ipv4", 00:19:45.892 "trsvcid": "8009", 00:19:45.892 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:45.892 "wait_for_attach": true 00:19:45.892 } 00:19:45.892 } 00:19:45.892 Got JSON-RPC error response 00:19:45.892 GoRPCClient: error on JSON-RPC call 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:45.892 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:45.893 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.893 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 
-s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.151 2024/07/25 05:25:50 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:19:46.151 request: 00:19:46.151 { 00:19:46.151 "method": "bdev_nvme_start_discovery", 00:19:46.151 "params": { 00:19:46.151 "name": "nvme_second", 00:19:46.151 "trtype": "tcp", 00:19:46.151 "traddr": "10.0.0.2", 00:19:46.151 "adrfam": "ipv4", 00:19:46.151 "trsvcid": "8009", 00:19:46.151 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:46.151 "wait_for_attach": true 00:19:46.151 } 00:19:46.151 } 00:19:46.151 Got JSON-RPC error response 00:19:46.151 GoRPCClient: error on JSON-RPC call 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:46.151 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.152 05:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.547 [2024-07-25 05:25:51.316902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:47.547 [2024-07-25 05:25:51.316978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b847b0 with addr=10.0.0.2, port=8010 00:19:47.547 [2024-07-25 05:25:51.317002] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:47.547 [2024-07-25 05:25:51.317014] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:47.547 [2024-07-25 05:25:51.317025] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:48.482 [2024-07-25 05:25:52.316913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.482 [2024-07-25 05:25:52.317004] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b847b0 with addr=10.0.0.2, port=8010 00:19:48.482 [2024-07-25 05:25:52.317028] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:48.482 [2024-07-25 05:25:52.317039] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:48.482 [2024-07-25 05:25:52.317050] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:49.418 [2024-07-25 05:25:53.316761] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:49.418 2024/07/25 05:25:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:19:49.418 request: 00:19:49.418 { 00:19:49.418 "method": "bdev_nvme_start_discovery", 00:19:49.418 "params": { 00:19:49.418 "name": "nvme_second", 00:19:49.418 "trtype": "tcp", 00:19:49.418 "traddr": "10.0.0.2", 00:19:49.418 "adrfam": "ipv4", 00:19:49.418 "trsvcid": "8010", 00:19:49.418 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:49.418 "wait_for_attach": false, 00:19:49.418 "attach_timeout_ms": 3000 00:19:49.418 } 00:19:49.418 } 00:19:49.418 Got JSON-RPC error response 00:19:49.418 GoRPCClient: error on JSON-RPC call 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 87852 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:49.418 05:25:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:49.418 rmmod nvme_tcp 00:19:49.418 rmmod nvme_fabrics 00:19:49.418 rmmod nvme_keyring 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 87803 ']' 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 87803 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 87803 ']' 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 87803 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87803 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:49.418 killing process with pid 87803 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:49.418 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87803' 00:19:49.419 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 87803 00:19:49.419 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 87803 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:49.677 00:19:49.677 real 0m11.174s 00:19:49.677 user 0m22.213s 00:19:49.677 sys 0m1.507s 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.677 ************************************ 00:19:49.677 END TEST nvmf_host_discovery 00:19:49.677 ************************************ 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.677 ************************************ 00:19:49.677 START TEST nvmf_host_multipath_status 00:19:49.677 ************************************ 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:49.677 * Looking for test storage... 00:19:49.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.677 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.678 05:25:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.678 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:19:49.936 Cannot find device "nvmf_tgt_br"
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:19:49.936 Cannot find device "nvmf_tgt_br2"
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:19:49.936 Cannot find device "nvmf_tgt_br"
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:19:49.936 Cannot find device "nvmf_tgt_br2"
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:49.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:49.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true
00:19:49.936 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:19:49.937 05:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:19:49.937 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:19:50.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:50.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms
00:19:50.195
00:19:50.195 --- 10.0.0.2 ping statistics ---
00:19:50.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:50.195 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:19:50.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:19:50.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms
00:19:50.195
00:19:50.195 --- 10.0.0.3 ping statistics ---
00:19:50.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:50.195 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:19:50.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:50.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:19:50.195
00:19:50.195 --- 10.0.0.1 ping statistics ---
00:19:50.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:50.195 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=88331
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 88331
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88331 ']'
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:50.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:50.195 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:19:50.195 [2024-07-25 05:25:54.283109] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... [2024-07-25 05:25:54.283213] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:50.453 [2024-07-25 05:25:54.422824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:50.453 [2024-07-25 05:25:54.481361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:50.453 [2024-07-25 05:25:54.481584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:50.453 [2024-07-25 05:25:54.481606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:50.453 [2024-07-25 05:25:54.481615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:50.453 [2024-07-25 05:25:54.481623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
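For reference, the nvmf_veth_init sequence traced above builds the whole test network out of one network namespace, three veth pairs, and one bridge: the initiator keeps 10.0.0.1 in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) move into nvmf_tgt_ns_spdk, and this particular test then distinguishes its two multipath paths by TCP port (4420 and 4421) on 10.0.0.2 rather than by address. Condensed from the ip/iptables commands in the trace (a sketch reconstructed from the trace, not the verbatim nvmf/common.sh source):

# One namespace, three veth pairs; the *_br peers stay in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator address in the root namespace, target addresses inside it.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring every end up (the trace also raises lo inside the namespace)...
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# ...and stitch the root-side peers together with a single bridge.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic and let the bridge forward between its own ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings then confirm reachability in both directions across the bridge before nvmf_tgt is started. Because the target runs under ip netns exec nvmf_tgt_ns_spdk, every listener it creates lives behind the namespaced interfaces, while its RPC socket /var/tmp/spdk.sock remains visible to the test in the root namespace (network namespaces do not isolate the filesystem).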
00:19:50.453 [2024-07-25 05:25:54.481737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.453 [2024-07-25 05:25:54.481909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88331 00:19:51.388 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:51.645 [2024-07-25 05:25:55.657304] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.645 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:51.904 Malloc0 00:19:51.904 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:52.162 05:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.419 05:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.677 [2024-07-25 05:25:56.841203] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.935 05:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:52.935 [2024-07-25 05:25:57.089357] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:52.935 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88435 00:19:52.935 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:52.935 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.935 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88435 /var/tmp/bdevperf.sock 00:19:52.935 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88435 ']' 00:19:52.935 05:25:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.192 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.192 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.193 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.193 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:53.451 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.451 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:53.451 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:53.709 05:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:53.967 Nvme0n1 00:19:53.967 05:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:54.532 Nvme0n1 00:19:54.532 05:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:54.532 05:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.429 05:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:56.429 05:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:56.687 05:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:56.945 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.344 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:58.602 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:58.602 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:58.602 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.602 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.861 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.861 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.861 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:58.861 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.119 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.119 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:59.119 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.119 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:59.376 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.377 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:59.377 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.377 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:59.634 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.634 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state 
non_optimized optimized 00:19:59.634 05:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:00.200 05:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:00.200 05:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.575 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:01.833 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.833 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:01.833 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.834 05:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.103 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.103 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:02.103 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:02.103 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.361 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.361 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:02.361 05:26:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.361 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:02.618 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.618 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:02.618 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:02.618 05:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.876 05:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.876 05:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:02.876 05:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:03.442 05:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:20:03.442 05:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.818 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:05.076 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:05.076 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:05.076 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.076 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.643 05:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:06.209 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.209 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:06.209 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.209 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:06.467 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.467 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:06.467 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:06.726 05:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:06.984 05:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:07.918 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:07.918 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:07.918 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.918 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:08.176 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.176 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:08.176 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:08.176 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.742 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:08.742 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:08.742 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.742 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:09.000 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.000 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:09.000 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.000 05:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:09.259 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.259 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:09.259 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:09.259 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.517 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.517 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:09.517 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.517 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:09.774 05:26:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:09.774 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:09.774 05:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:10.032 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:10.598 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:11.533 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:11.533 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:11.533 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.533 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:11.790 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:11.790 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:11.790 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:11.790 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.048 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:12.048 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:12.049 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.049 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:12.307 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.307 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:12.307 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:12.307 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.565 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.565 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:12.565 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.565 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:12.823 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:12.823 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:12.823 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.823 05:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:13.390 05:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:13.390 05:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:13.390 05:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:13.648 05:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:13.916 05:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:14.869 05:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:14.869 05:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:14.869 05:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:14.869 05:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.128 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:15.128 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:15.128 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.128 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:15.694 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:20:15.694 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:15.694 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:15.694 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.951 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.951 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:15.951 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:15.951 05:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.209 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.209 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:16.209 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.209 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:16.467 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.467 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:16.467 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:16.467 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.726 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.726 05:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:16.985 05:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:16.985 05:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:20:17.552 05:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:17.552 05:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@120 -- # sleep 1 00:20:18.925 05:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:18.925 05:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:18.925 05:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.925 05:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:18.925 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.925 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:18.925 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.925 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:19.183 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.183 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.183 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.183 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:20.311 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.311 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
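Every check_status round in this trace is six of these probes in a row (current, connected, and accessible, for ports 4420 and 4421), re-run after each set_ANA_state transition. The repeating rpc.py-plus-jq pairs all come from the port_status helper at host/multipath_status.sh@64; a hedged reconstruction of it, inferred from the trace rather than copied from the script (rpc_py and bdevperf_rpc_sock were set at multipath_status.sh@15 and @18):

# Sketch of the port_status probe as inferred from the trace above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

port_status() {
  local port=$1 attr=$2 expected=$3 actual
  # Ask bdevperf for its view of all I/O paths, pick the listener by port.
  actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
  [[ "$actual" == "$expected" ]]   # the [[ true == \t\r\u\e ]] lines in the log
}

# One full round, e.g. after set_ANA_state non_optimized inaccessible:
port_status 4420 current true
port_status 4421 current false
port_status 4420 connected true
port_status 4421 connected true      # still connected, just not usable
port_status 4420 accessible true
port_status 4421 accessible false    # reflects the inaccessible ANA state

Reading the rounds together: connected tracks the NVMe/TCP connection and stays true throughout this trace, accessible follows the optimized/non_optimized/inaccessible ANA state of each listener, and current marks where I/O is actually routed, shifting between ports as the preferred path changes and reporting true on both paths at once after bdev_nvme_set_multipath_policy -p active_active.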
00:20:20.311 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:20.311 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.567 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.567 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:20.567 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:20.825 05:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:21.082 05:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:22.018 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:22.018 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:22.018 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.018 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:22.277 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:22.277 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:22.277 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.277 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:22.534 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.534 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:22.534 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.534 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:22.793 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.793 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:22.793 05:26:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:22.793 05:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:20:23.053 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:23.053 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:20:23.053 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:20:23.053 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:23.618 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:23.618 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:20:23.618 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:23.618 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:20:23.618 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:23.618 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:20:23.618 05:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:20:23.876 05:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:20:24.134 05:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:25.509 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:20:25.766 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:25.766 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:20:25.766 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:20:25.766 05:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:26.025 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:26.025 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:20:26.025 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:26.025 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:20:26.283 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:26.283 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:20:26.283 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:26.283 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:20:26.541 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:26.541 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:20:26.541 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:26.541 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:20:26.800 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:26.800 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:20:26.800 05:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:20:27.057 05:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:20:27.344 05:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:20:28.277 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:20:28.277 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:20:28.278 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:28.278 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:20:28.842 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:28.842 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:20:28.842 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:28.842 05:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:20:29.100 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:20:29.100 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:20:29.100 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:20:29.100 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:29.357 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:29.357 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:20:29.357 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:29.357 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:20:29.615 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:29.615 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:20:29.615 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:20:29.615 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:29.874 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:20:29.874 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:20:29.874 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:20:29.874 05:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88435
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88435 ']'
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88435
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88435
00:20:30.132 killing process with pid 88435
05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88435'
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88435
00:20:30.132 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88435
00:20:30.132 Connection closed with partial response:
00:20:30.132
00:20:30.132
00:20:30.394 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88435
00:20:30.394 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:30.394 [2024-07-25 05:25:57.153550] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:20:30.394 [2024-07-25 05:25:57.153658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88435 ]
00:20:30.394 [2024-07-25 05:25:57.286297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:30.394 [2024-07-25 05:25:57.355711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:30.394 Running I/O for 90 seconds...
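The xtrace above captures the whole mechanism of this test: io-path state is read back from bdevperf over its RPC socket with bdev_nvme_get_io_paths and filtered with jq, while ANA states are flipped on the target side with nvmf_subsystem_listener_set_ana_state. A minimal sketch of the three helpers as they can be reconstructed from the trace (the rpc.py invocations and jq filters are copied verbatim from the log; the function names and call shapes match the trace, but the local variable names, quoting and error handling are assumptions, not the verbatim script):

port_status() {
	# port_status <trsvcid> <field> <expected> (signature inferred from calls
	# such as "port_status 4421 accessible false" above): fetch the io_paths
	# report from bdevperf and compare one field of the path on the given port.
	local port=$1 field=$2 expected=$3 status
	status=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
	[[ $status == "$expected" ]]
}

check_status() {
	# Six expected values: current, connected and accessible for ports 4420
	# and 4421, in the order seen at multipath_status.sh@131 and @135 above;
	# under the suite's errexit any mismatch fails the test (assumed).
	port_status 4420 current "$1"
	port_status 4421 current "$2"
	port_status 4420 connected "$3"
	port_status 4421 connected "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}

set_ANA_state() {
	# Set the ANA state of cnode1's two listeners on the target side, e.g.
	# "set_ANA_state non_optimized inaccessible" as at multipath_status.sh@133.
	/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
	/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

In the bdevperf log (try.txt) dumped here, each pair of nvme_qpair.c NOTICE lines below is one I/O command followed by its completion. The completions all carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 3h (path related) with status code 2h (asymmetric access inaccessible), which matches the ANA flips driven above: I/O sent down a path whose listener is inaccessible is failed back with a path-related status so the initiator can retry it on the other port.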
00:20:30.394 [2024-07-25 05:26:14.164220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.394 [2024-07-25 05:26:14.164652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.164980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.164998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.394 [2024-07-25 05:26:14.165367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:30.394 [2024-07-25 05:26:14.165389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:30.395 [2024-07-25 05:26:14.165520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.165950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.165985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58032 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.166932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.166948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:30.395 
[2024-07-25 05:26:14.166986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.167023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.167052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.167068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.167093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.167109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.167134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.167152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.167177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.167192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:30.395 [2024-07-25 05:26:14.167217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-07-25 05:26:14.167234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.167948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.167989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168261] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:30.396 [2024-07-25 05:26:14.168949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.168995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.169012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.169042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.169059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.169089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.169105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:30.396 [2024-07-25 05:26:14.169134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-07-25 05:26:14.169151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.169579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.169936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.169966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.170017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.170074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.170120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.170166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.170212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.170259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-07-25 05:26:14.170319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:30.397 [2024-07-25 05:26:14.170351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:14.170367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 
dnr:0 00:20:30.397 [2024-07-25 05:26:31.429501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-07-25 05:26:31.429570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats for each remaining queued WRITE (lba 98128 through 99008) and READ on qid:1, every command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) as sqhd advances from 0071 through 007f and wraps around to 002a ...]
00:20:30.399 [2024-07-25 05:26:31.433330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-07-25 05:26:31.433344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:30.399 [2024-07-25 05:26:31.433366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-07-25 05:26:31.433382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:30.399 [2024-07-25 05:26:31.433403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-07-25 05:26:31.433418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:30.399 [2024-07-25 05:26:31.433440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-07-25 05:26:31.433455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:30.399 [2024-07-25 05:26:31.433477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-07-25 05:26:31.433493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:30.399 Received shutdown signal, test time was about 35.573160 seconds 00:20:30.399 00:20:30.399 Latency(us) 00:20:30.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.399 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:30.399 Verification LBA range: start 0x0 length 0x4000 00:20:30.399 Nvme0n1 : 35.57 8410.61 32.85 0.00 0.00 15187.61 170.36 4026531.84 00:20:30.399 =================================================================================================================== 00:20:30.399 Total : 8410.61 32.85 0.00 0.00 15187.61 170.36 4026531.84 00:20:30.399 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.657 rmmod nvme_tcp 00:20:30.657 rmmod nvme_fabrics 00:20:30.657 rmmod nvme_keyring 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 88331 ']' 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 88331 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88331 ']' 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88331 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88331 00:20:30.657 killing process with pid 88331 00:20:30.657 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:30.658 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:30.658 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88331' 00:20:30.658 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88331 00:20:30.658 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88331 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:30.916 00:20:30.916 real 0m41.233s 00:20:30.916 user 2m16.263s 00:20:30.916 sys 0m9.822s 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.916 05:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 
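The teardown above is the standard nvmftestfini path: the initiator-side kernel modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), the target process is killed by pid and reaped, and the init-side interface is flushed. A minimal sketch of the same cleanup, assuming the stack was loaded via modprobe and that $nvmfpid holds the target pid (here 88331):

    # unload the initiator modules; modprobe -v -r also drops now-unused
    # dependencies, as the rmmod lines above show
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the nvmf target and wait for it to exit
    kill "$nvmfpid" && wait "$nvmfpid"

    # drop any leftover IPv4 addresses from the init-side veth
    ip -4 addr flush nvmf_init_if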
00:20:30.916 ************************************ 00:20:30.916 END TEST nvmf_host_multipath_status 00:20:30.916 ************************************ 00:20:30.916 05:26:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:30.916 05:26:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:30.916 05:26:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.916 05:26:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.916 ************************************ 00:20:30.916 START TEST nvmf_discovery_remove_ifc 00:20:30.916 ************************************ 00:20:30.916 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:31.174 * Looking for test storage... 00:20:31.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:31.174 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.175 05:26:35
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.175 05:26:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:31.175 Cannot find device "nvmf_tgt_br" 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:31.175 Cannot find device "nvmf_tgt_br2" 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:31.175 Cannot find device "nvmf_tgt_br" 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:31.175 Cannot find device "nvmf_tgt_br2" 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:31.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:31.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 
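The "Cannot find device" and "Cannot open network namespace" messages above are expected on a fresh runner: nvmf_veth_init begins by tearing down whatever topology a previous run may have left behind, and each cleanup command is followed by "# true" so a missing device cannot abort the script. A rough sketch of that best-effort pattern, using the interface and namespace names from this log:

    # best-effort teardown of a possibly nonexistent topology; ignore failures
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br down || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true

The fresh namespace, veth pairs and bridge are then built in the lines that follow.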
00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:31.175 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:31.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:31.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:20:31.448 00:20:31.448 --- 10.0.0.2 ping statistics --- 00:20:31.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.448 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:31.448 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:31.448 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:31.448 00:20:31.448 --- 10.0.0.3 ping statistics --- 00:20:31.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.448 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:31.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:31.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:31.448 00:20:31.448 --- 10.0.0.1 ping statistics --- 00:20:31.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.448 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=89741 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 89741 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:31.448 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 89741 ']' 00:20:31.449 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.449 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:31.449 05:26:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.449 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:31.449 05:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.449 [2024-07-25 05:26:35.587700] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:20:31.449 [2024-07-25 05:26:35.587809] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.716 [2024-07-25 05:26:35.727709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.716 [2024-07-25 05:26:35.786310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.716 [2024-07-25 05:26:35.786362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.716 [2024-07-25 05:26:35.786379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.716 [2024-07-25 05:26:35.786392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.716 [2024-07-25 05:26:35.786404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.716 [2024-07-25 05:26:35.786445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.650 [2024-07-25 05:26:36.649796] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.650 [2024-07-25 05:26:36.657900] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:32.650 null0 00:20:32.650 [2024-07-25 05:26:36.689871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.650 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=89791 
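waitforlisten above simply blocks until the freshly launched nvmf_tgt (pid 89741) starts answering RPCs on /var/tmp/spdk.sock; the same wait is performed next for the host-side app on /tmp/host.sock. A minimal sketch of such a readiness probe, assuming SPDK's rpc.py; the retry count and interval here are illustrative, not the harness's actual values:

    # poll the RPC socket until the app responds, then proceed
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done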
00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 89791 /tmp/host.sock 00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 89791 ']' 00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.651 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.651 05:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.651 [2024-07-25 05:26:36.765169] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:20:32.651 [2024-07-25 05:26:36.765253] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89791 ] 00:20:32.908 [2024-07-25 05:26:36.902800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.908 [2024-07-25 05:26:36.997372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.843 05:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.779 [2024-07-25 05:26:38.823726] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:34.779 [2024-07-25 05:26:38.823758] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:34.779 [2024-07-25 05:26:38.823778] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:34.779 [2024-07-25 05:26:38.909870] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:35.037 [2024-07-25 05:26:38.966781] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:35.037 [2024-07-25 05:26:38.966860] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:35.037 [2024-07-25 05:26:38.966891] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:35.037 [2024-07-25 05:26:38.966908] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:35.037 [2024-07-25 05:26:38.966934] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.037 [2024-07-25 05:26:38.972387] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6ba650 was disconnected and freed. delete nvme_qpair. 
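With discovery attached and nvme0n1 created, the wait_for_bdev helper that runs next keeps calling get_bdev_list until the bdev list matches the expected name. A condensed sketch of that loop, assuming rpc.py and jq exactly as invoked in the log lines below:

    # join all bdev names reported by the host app into one sorted line
    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # poll once per second until nvme0n1 shows up
    until [[ "$(get_bdev_list)" == "nvme0n1" ]]; do
        sleep 1
    done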
00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.037 05:26:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:35.037 05:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.971 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.228 05:26:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:36.228 05:26:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:37.162 05:26:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:38.095 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.353 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:38.353 05:26:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:39.286 05:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:40.220 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.478 [2024-07-25 05:26:44.394724] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:40.478 [2024-07-25 05:26:44.394817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.478 [2024-07-25 05:26:44.394833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.478 [2024-07-25 05:26:44.394847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.478 [2024-07-25 05:26:44.394857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.478 [2024-07-25 05:26:44.394866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.478 [2024-07-25 05:26:44.394875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.478 [2024-07-25 05:26:44.394885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.478 [2024-07-25 05:26:44.394894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.478 [2024-07-25 05:26:44.394905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.478 [2024-07-25 05:26:44.394914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.478 [2024-07-25 05:26:44.394924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x683900 is same with the state(5) to be set 00:20:40.478 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:40.478 05:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:40.478 [2024-07-25 05:26:44.404713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x683900 (9): Bad file descriptor 00:20:40.478 [2024-07-25 05:26:44.414761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:41.413 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:41.413 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.413 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:41.413 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.413 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:41.413 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:41.413 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:41.413 [2024-07-25 05:26:45.446071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:41.413 [2024-07-25 05:26:45.446216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x683900 with addr=10.0.0.2, port=4420 00:20:41.413 [2024-07-25 05:26:45.446254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x683900 is same with the state(5) to be set 00:20:41.413 [2024-07-25 05:26:45.446321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x683900 (9): Bad file descriptor 00:20:41.413 [2024-07-25 05:26:45.447261] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.413 [2024-07-25 05:26:45.447350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:41.413 [2024-07-25 05:26:45.447375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:41.413 [2024-07-25 05:26:45.447397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:41.413 [2024-07-25 05:26:45.447479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.413 [2024-07-25 05:26:45.447507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:41.414 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.414 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:41.414 05:26:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:42.347 [2024-07-25 05:26:46.447565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
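The repeating trace above is host/discovery_remove_ifc.sh polling the host application once per second until the namespace bdev disappears: get_bdev_list (@29) asks the host app over its private RPC socket for the current bdev names, and the @33/@34 pair re-checks that result against the expected value ('' while waiting for nvme0n1 to go away), sleeping one second between probes. A minimal sketch of the two helpers as reconstructed from the trace; the pipeline matches the @29 trace exactly, while the loop body around it is an assumption:

    # Sorted, space-joined bdev names from the host app's RPC socket
    # (rpc_cmd is the harness wrapper around scripts/rpc.py).
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Probe once per second until the list matches the expected value,
    # e.g. wait_for_bdev '' to wait for a removal to complete.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }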
00:20:42.347 [2024-07-25 05:26:46.447636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:42.347 [2024-07-25 05:26:46.447649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:42.347 [2024-07-25 05:26:46.447659] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:42.347 [2024-07-25 05:26:46.447685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:42.348 [2024-07-25 05:26:46.447716] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:42.348 [2024-07-25 05:26:46.447778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.348 [2024-07-25 05:26:46.447795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.348 [2024-07-25 05:26:46.447809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.348 [2024-07-25 05:26:46.447818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.348 [2024-07-25 05:26:46.447828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.348 [2024-07-25 05:26:46.447838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.348 [2024-07-25 05:26:46.447848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.348 [2024-07-25 05:26:46.447857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.348 [2024-07-25 05:26:46.447867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.348 [2024-07-25 05:26:46.447876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.348 [2024-07-25 05:26:46.447886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
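The aborted ASYNC EVENT REQUESTs and KEEP ALIVE above are the discovery controller's outstanding admin commands being completed with ABORTED - SQ DELETION as its connection to 10.0.0.2 goes away; the test provokes this by pulling the target interface out from under the live connection and restores it a few lines below (@82-@83). A plausible shape of that flap, where the remove half falls outside this excerpt and is assumed to mirror the traced restore:

    # Assumed remove step (not shown in this excerpt), mirroring the
    # restore traced below at @82-@83:
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ... the host's reads time out (errno 110) and the bdev drains ...
    # Restore, as traced at @82-@83:
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up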
00:20:42.348 [2024-07-25 05:26:46.448436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6263e0 (9): Bad file descriptor 00:20:42.348 [2024-07-25 05:26:46.449448] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:42.348 [2024-07-25 05:26:46.449473] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:42.348 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:42.607 05:26:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.553 05:26:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:43.553 05:26:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:44.488 [2024-07-25 05:26:48.455460] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:44.488 [2024-07-25 05:26:48.455516] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:44.488 [2024-07-25 05:26:48.455549] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:44.488 [2024-07-25 05:26:48.541592] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:44.488 [2024-07-25 05:26:48.597692] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:44.488 [2024-07-25 05:26:48.597753] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:44.488 [2024-07-25 05:26:48.597779] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:44.488 [2024-07-25 05:26:48.597796] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:20:44.488 [2024-07-25 05:26:48.597806] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:44.488 [2024-07-25 05:26:48.604357] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x69f390 was disconnected and freed. delete nvme_qpair. 
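The 05:26:48 block above is the host's discovery poller reconnecting to the discovery service at 10.0.0.2:8009 once the interface is back, re-reading the discovery log page, and attaching the NVM subsystem as nvme1, which the wait loop below then observes. A discovery service of this shape is typically attached via the bdev_nvme_start_discovery RPC; the invocation this test used is not in the excerpt, so the flags below are an assumption matched to the addresses in the log:

    # Assumed discovery attach matching "Discovery[10.0.0.2:8009]" above;
    # -w blocks until the initial attach completes.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -w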
00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 89791 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 89791 ']' 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 89791 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89791 00:20:44.746 killing process with pid 89791 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89791' 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 89791 00:20:44.746 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 89791 00:20:45.004 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:45.004 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.004 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:20:45.004 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.004 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:20:45.004 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.004 05:26:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.004 rmmod nvme_tcp 00:20:45.004 rmmod nvme_fabrics 00:20:45.004 rmmod nvme_keyring 00:20:45.004 05:26:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 89741 ']' 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 89741 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 89741 ']' 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 89741 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89741 00:20:45.004 killing process with pid 89741 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89741' 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 89741 00:20:45.004 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 89741 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:45.262 ************************************ 00:20:45.262 END TEST nvmf_discovery_remove_ifc 00:20:45.262 ************************************ 00:20:45.262 00:20:45.262 real 0m14.189s 00:20:45.262 user 0m25.561s 00:20:45.262 sys 0m1.530s 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.262 ************************************ 00:20:45.262 START TEST nvmf_identify_kernel_target 00:20:45.262 ************************************ 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:45.262 * Looking for test storage... 00:20:45.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.262 
05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.262 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:45.263 Cannot find device "nvmf_tgt_br" 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.263 Cannot find device "nvmf_tgt_br2" 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:45.263 Cannot find device "nvmf_tgt_br" 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:20:45.263 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:45.521 Cannot find device "nvmf_tgt_br2" 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.521 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.521 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:45.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:20:45.521 00:20:45.521 --- 10.0.0.2 ping statistics --- 00:20:45.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.521 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:45.521 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:45.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:45.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:45.521 00:20:45.521 --- 10.0.0.3 ping statistics --- 00:20:45.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.522 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:45.522 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:45.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:45.780 00:20:45.780 --- 10.0.0.1 ping statistics --- 00:20:45.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.780 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:45.780 05:26:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:46.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:46.038 Waiting for block devices as requested 00:20:46.038 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.296 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:46.296 No valid GPT data, bailing 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:46.296 05:26:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:46.296 No valid GPT data, bailing 00:20:46.296 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:46.554 No valid GPT data, bailing 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:46.554 No valid GPT data, bailing 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -a 10.0.0.1 -t tcp -s 4420 00:20:46.554 00:20:46.554 Discovery Log Number of Records 2, Generation counter 2 00:20:46.554 =====Discovery Log Entry 0====== 00:20:46.554 trtype: tcp 00:20:46.554 adrfam: ipv4 00:20:46.554 subtype: current discovery subsystem 00:20:46.554 treq: not specified, sq flow control disable supported 00:20:46.554 portid: 1 00:20:46.554 trsvcid: 4420 00:20:46.554 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:46.554 traddr: 10.0.0.1 00:20:46.554 eflags: none 00:20:46.554 sectype: none 00:20:46.554 =====Discovery Log Entry 1====== 00:20:46.554 trtype: tcp 00:20:46.554 adrfam: ipv4 00:20:46.554 subtype: nvme subsystem 00:20:46.554 treq: not 
specified, sq flow control disable supported 00:20:46.554 portid: 1 00:20:46.554 trsvcid: 4420 00:20:46.554 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:46.554 traddr: 10.0.0.1 00:20:46.554 eflags: none 00:20:46.554 sectype: none 00:20:46.554 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:46.554 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:46.812 ===================================================== 00:20:46.812 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:46.812 ===================================================== 00:20:46.812 Controller Capabilities/Features 00:20:46.812 ================================ 00:20:46.812 Vendor ID: 0000 00:20:46.812 Subsystem Vendor ID: 0000 00:20:46.812 Serial Number: a3ec32d27260031a9768 00:20:46.812 Model Number: Linux 00:20:46.812 Firmware Version: 6.7.0-68 00:20:46.812 Recommended Arb Burst: 0 00:20:46.812 IEEE OUI Identifier: 00 00 00 00:20:46.812 Multi-path I/O 00:20:46.812 May have multiple subsystem ports: No 00:20:46.812 May have multiple controllers: No 00:20:46.812 Associated with SR-IOV VF: No 00:20:46.812 Max Data Transfer Size: Unlimited 00:20:46.812 Max Number of Namespaces: 0 00:20:46.812 Max Number of I/O Queues: 1024 00:20:46.812 NVMe Specification Version (VS): 1.3 00:20:46.812 NVMe Specification Version (Identify): 1.3 00:20:46.812 Maximum Queue Entries: 1024 00:20:46.812 Contiguous Queues Required: No 00:20:46.812 Arbitration Mechanisms Supported 00:20:46.812 Weighted Round Robin: Not Supported 00:20:46.812 Vendor Specific: Not Supported 00:20:46.812 Reset Timeout: 7500 ms 00:20:46.812 Doorbell Stride: 4 bytes 00:20:46.812 NVM Subsystem Reset: Not Supported 00:20:46.812 Command Sets Supported 00:20:46.812 NVM Command Set: Supported 00:20:46.813 Boot Partition: Not Supported 00:20:46.813 Memory Page Size Minimum: 4096 bytes 00:20:46.813 Memory Page Size Maximum: 4096 bytes 00:20:46.813 Persistent Memory Region: Not Supported 00:20:46.813 Optional Asynchronous Events Supported 00:20:46.813 Namespace Attribute Notices: Not Supported 00:20:46.813 Firmware Activation Notices: Not Supported 00:20:46.813 ANA Change Notices: Not Supported 00:20:46.813 PLE Aggregate Log Change Notices: Not Supported 00:20:46.813 LBA Status Info Alert Notices: Not Supported 00:20:46.813 EGE Aggregate Log Change Notices: Not Supported 00:20:46.813 Normal NVM Subsystem Shutdown event: Not Supported 00:20:46.813 Zone Descriptor Change Notices: Not Supported 00:20:46.813 Discovery Log Change Notices: Supported 00:20:46.813 Controller Attributes 00:20:46.813 128-bit Host Identifier: Not Supported 00:20:46.813 Non-Operational Permissive Mode: Not Supported 00:20:46.813 NVM Sets: Not Supported 00:20:46.813 Read Recovery Levels: Not Supported 00:20:46.813 Endurance Groups: Not Supported 00:20:46.813 Predictable Latency Mode: Not Supported 00:20:46.813 Traffic Based Keep ALive: Not Supported 00:20:46.813 Namespace Granularity: Not Supported 00:20:46.813 SQ Associations: Not Supported 00:20:46.813 UUID List: Not Supported 00:20:46.813 Multi-Domain Subsystem: Not Supported 00:20:46.813 Fixed Capacity Management: Not Supported 00:20:46.813 Variable Capacity Management: Not Supported 00:20:46.813 Delete Endurance Group: Not Supported 00:20:46.813 Delete NVM Set: Not Supported 00:20:46.813 Extended LBA Formats Supported: Not Supported 00:20:46.813 Flexible Data 
Placement Supported: Not Supported 00:20:46.813 00:20:46.813 Controller Memory Buffer Support 00:20:46.813 ================================ 00:20:46.813 Supported: No 00:20:46.813 00:20:46.813 Persistent Memory Region Support 00:20:46.813 ================================ 00:20:46.813 Supported: No 00:20:46.813 00:20:46.813 Admin Command Set Attributes 00:20:46.813 ============================ 00:20:46.813 Security Send/Receive: Not Supported 00:20:46.813 Format NVM: Not Supported 00:20:46.813 Firmware Activate/Download: Not Supported 00:20:46.813 Namespace Management: Not Supported 00:20:46.813 Device Self-Test: Not Supported 00:20:46.813 Directives: Not Supported 00:20:46.813 NVMe-MI: Not Supported 00:20:46.813 Virtualization Management: Not Supported 00:20:46.813 Doorbell Buffer Config: Not Supported 00:20:46.813 Get LBA Status Capability: Not Supported 00:20:46.813 Command & Feature Lockdown Capability: Not Supported 00:20:46.813 Abort Command Limit: 1 00:20:46.813 Async Event Request Limit: 1 00:20:46.813 Number of Firmware Slots: N/A 00:20:46.813 Firmware Slot 1 Read-Only: N/A 00:20:46.813 Firmware Activation Without Reset: N/A 00:20:46.813 Multiple Update Detection Support: N/A 00:20:46.813 Firmware Update Granularity: No Information Provided 00:20:46.813 Per-Namespace SMART Log: No 00:20:46.813 Asymmetric Namespace Access Log Page: Not Supported 00:20:46.813 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:46.813 Command Effects Log Page: Not Supported 00:20:46.813 Get Log Page Extended Data: Supported 00:20:46.813 Telemetry Log Pages: Not Supported 00:20:46.813 Persistent Event Log Pages: Not Supported 00:20:46.813 Supported Log Pages Log Page: May Support 00:20:46.813 Commands Supported & Effects Log Page: Not Supported 00:20:46.813 Feature Identifiers & Effects Log Page:May Support 00:20:46.813 NVMe-MI Commands & Effects Log Page: May Support 00:20:46.813 Data Area 4 for Telemetry Log: Not Supported 00:20:46.813 Error Log Page Entries Supported: 1 00:20:46.813 Keep Alive: Not Supported 00:20:46.813 00:20:46.813 NVM Command Set Attributes 00:20:46.813 ========================== 00:20:46.813 Submission Queue Entry Size 00:20:46.813 Max: 1 00:20:46.813 Min: 1 00:20:46.813 Completion Queue Entry Size 00:20:46.813 Max: 1 00:20:46.813 Min: 1 00:20:46.813 Number of Namespaces: 0 00:20:46.813 Compare Command: Not Supported 00:20:46.813 Write Uncorrectable Command: Not Supported 00:20:46.813 Dataset Management Command: Not Supported 00:20:46.813 Write Zeroes Command: Not Supported 00:20:46.813 Set Features Save Field: Not Supported 00:20:46.813 Reservations: Not Supported 00:20:46.813 Timestamp: Not Supported 00:20:46.813 Copy: Not Supported 00:20:46.813 Volatile Write Cache: Not Present 00:20:46.813 Atomic Write Unit (Normal): 1 00:20:46.813 Atomic Write Unit (PFail): 1 00:20:46.813 Atomic Compare & Write Unit: 1 00:20:46.813 Fused Compare & Write: Not Supported 00:20:46.813 Scatter-Gather List 00:20:46.813 SGL Command Set: Supported 00:20:46.813 SGL Keyed: Not Supported 00:20:46.813 SGL Bit Bucket Descriptor: Not Supported 00:20:46.813 SGL Metadata Pointer: Not Supported 00:20:46.813 Oversized SGL: Not Supported 00:20:46.813 SGL Metadata Address: Not Supported 00:20:46.813 SGL Offset: Supported 00:20:46.813 Transport SGL Data Block: Not Supported 00:20:46.813 Replay Protected Memory Block: Not Supported 00:20:46.813 00:20:46.813 Firmware Slot Information 00:20:46.813 ========================= 00:20:46.813 Active slot: 0 00:20:46.813 00:20:46.813 00:20:46.813 Error Log 
00:20:46.813 ========= 00:20:46.813 00:20:46.813 Active Namespaces 00:20:46.813 ================= 00:20:46.813 Discovery Log Page 00:20:46.813 ================== 00:20:46.813 Generation Counter: 2 00:20:46.813 Number of Records: 2 00:20:46.813 Record Format: 0 00:20:46.813 00:20:46.813 Discovery Log Entry 0 00:20:46.813 ---------------------- 00:20:46.813 Transport Type: 3 (TCP) 00:20:46.813 Address Family: 1 (IPv4) 00:20:46.813 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:46.813 Entry Flags: 00:20:46.813 Duplicate Returned Information: 0 00:20:46.813 Explicit Persistent Connection Support for Discovery: 0 00:20:46.813 Transport Requirements: 00:20:46.813 Secure Channel: Not Specified 00:20:46.813 Port ID: 1 (0x0001) 00:20:46.813 Controller ID: 65535 (0xffff) 00:20:46.813 Admin Max SQ Size: 32 00:20:46.813 Transport Service Identifier: 4420 00:20:46.813 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:46.813 Transport Address: 10.0.0.1 00:20:46.813 Discovery Log Entry 1 00:20:46.813 ---------------------- 00:20:46.813 Transport Type: 3 (TCP) 00:20:46.813 Address Family: 1 (IPv4) 00:20:46.813 Subsystem Type: 2 (NVM Subsystem) 00:20:46.813 Entry Flags: 00:20:46.813 Duplicate Returned Information: 0 00:20:46.813 Explicit Persistent Connection Support for Discovery: 0 00:20:46.813 Transport Requirements: 00:20:46.813 Secure Channel: Not Specified 00:20:46.813 Port ID: 1 (0x0001) 00:20:46.813 Controller ID: 65535 (0xffff) 00:20:46.813 Admin Max SQ Size: 32 00:20:46.813 Transport Service Identifier: 4420 00:20:46.813 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:46.813 Transport Address: 10.0.0.1 00:20:46.813 05:26:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:47.073 get_feature(0x01) failed 00:20:47.073 get_feature(0x02) failed 00:20:47.073 get_feature(0x04) failed 00:20:47.073 ===================================================== 00:20:47.073 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:47.073 ===================================================== 00:20:47.073 Controller Capabilities/Features 00:20:47.073 ================================ 00:20:47.073 Vendor ID: 0000 00:20:47.073 Subsystem Vendor ID: 0000 00:20:47.073 Serial Number: 3276ac0c34fc57305b96 00:20:47.073 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:47.073 Firmware Version: 6.7.0-68 00:20:47.073 Recommended Arb Burst: 6 00:20:47.073 IEEE OUI Identifier: 00 00 00 00:20:47.073 Multi-path I/O 00:20:47.073 May have multiple subsystem ports: Yes 00:20:47.073 May have multiple controllers: Yes 00:20:47.073 Associated with SR-IOV VF: No 00:20:47.073 Max Data Transfer Size: Unlimited 00:20:47.073 Max Number of Namespaces: 1024 00:20:47.073 Max Number of I/O Queues: 128 00:20:47.073 NVMe Specification Version (VS): 1.3 00:20:47.073 NVMe Specification Version (Identify): 1.3 00:20:47.073 Maximum Queue Entries: 1024 00:20:47.073 Contiguous Queues Required: No 00:20:47.073 Arbitration Mechanisms Supported 00:20:47.073 Weighted Round Robin: Not Supported 00:20:47.073 Vendor Specific: Not Supported 00:20:47.073 Reset Timeout: 7500 ms 00:20:47.073 Doorbell Stride: 4 bytes 00:20:47.073 NVM Subsystem Reset: Not Supported 00:20:47.073 Command Sets Supported 00:20:47.073 NVM Command Set: Supported 00:20:47.073 Boot Partition: Not Supported 00:20:47.073 Memory 
Page Size Minimum: 4096 bytes 00:20:47.073 Memory Page Size Maximum: 4096 bytes 00:20:47.073 Persistent Memory Region: Not Supported 00:20:47.073 Optional Asynchronous Events Supported 00:20:47.073 Namespace Attribute Notices: Supported 00:20:47.073 Firmware Activation Notices: Not Supported 00:20:47.073 ANA Change Notices: Supported 00:20:47.073 PLE Aggregate Log Change Notices: Not Supported 00:20:47.073 LBA Status Info Alert Notices: Not Supported 00:20:47.073 EGE Aggregate Log Change Notices: Not Supported 00:20:47.073 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.073 Zone Descriptor Change Notices: Not Supported 00:20:47.073 Discovery Log Change Notices: Not Supported 00:20:47.073 Controller Attributes 00:20:47.073 128-bit Host Identifier: Supported 00:20:47.073 Non-Operational Permissive Mode: Not Supported 00:20:47.073 NVM Sets: Not Supported 00:20:47.073 Read Recovery Levels: Not Supported 00:20:47.073 Endurance Groups: Not Supported 00:20:47.073 Predictable Latency Mode: Not Supported 00:20:47.073 Traffic Based Keep ALive: Supported 00:20:47.073 Namespace Granularity: Not Supported 00:20:47.073 SQ Associations: Not Supported 00:20:47.073 UUID List: Not Supported 00:20:47.073 Multi-Domain Subsystem: Not Supported 00:20:47.073 Fixed Capacity Management: Not Supported 00:20:47.073 Variable Capacity Management: Not Supported 00:20:47.073 Delete Endurance Group: Not Supported 00:20:47.073 Delete NVM Set: Not Supported 00:20:47.073 Extended LBA Formats Supported: Not Supported 00:20:47.073 Flexible Data Placement Supported: Not Supported 00:20:47.073 00:20:47.073 Controller Memory Buffer Support 00:20:47.073 ================================ 00:20:47.073 Supported: No 00:20:47.073 00:20:47.073 Persistent Memory Region Support 00:20:47.073 ================================ 00:20:47.073 Supported: No 00:20:47.073 00:20:47.073 Admin Command Set Attributes 00:20:47.073 ============================ 00:20:47.073 Security Send/Receive: Not Supported 00:20:47.073 Format NVM: Not Supported 00:20:47.073 Firmware Activate/Download: Not Supported 00:20:47.073 Namespace Management: Not Supported 00:20:47.073 Device Self-Test: Not Supported 00:20:47.073 Directives: Not Supported 00:20:47.073 NVMe-MI: Not Supported 00:20:47.073 Virtualization Management: Not Supported 00:20:47.073 Doorbell Buffer Config: Not Supported 00:20:47.073 Get LBA Status Capability: Not Supported 00:20:47.073 Command & Feature Lockdown Capability: Not Supported 00:20:47.073 Abort Command Limit: 4 00:20:47.073 Async Event Request Limit: 4 00:20:47.073 Number of Firmware Slots: N/A 00:20:47.073 Firmware Slot 1 Read-Only: N/A 00:20:47.073 Firmware Activation Without Reset: N/A 00:20:47.073 Multiple Update Detection Support: N/A 00:20:47.073 Firmware Update Granularity: No Information Provided 00:20:47.073 Per-Namespace SMART Log: Yes 00:20:47.073 Asymmetric Namespace Access Log Page: Supported 00:20:47.073 ANA Transition Time : 10 sec 00:20:47.073 00:20:47.073 Asymmetric Namespace Access Capabilities 00:20:47.073 ANA Optimized State : Supported 00:20:47.073 ANA Non-Optimized State : Supported 00:20:47.073 ANA Inaccessible State : Supported 00:20:47.073 ANA Persistent Loss State : Supported 00:20:47.073 ANA Change State : Supported 00:20:47.073 ANAGRPID is not changed : No 00:20:47.073 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:47.073 00:20:47.073 ANA Group Identifier Maximum : 128 00:20:47.073 Number of ANA Group Identifiers : 128 00:20:47.073 Max Number of Allowed Namespaces : 1024 00:20:47.073 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:47.073 Command Effects Log Page: Supported 00:20:47.073 Get Log Page Extended Data: Supported 00:20:47.073 Telemetry Log Pages: Not Supported 00:20:47.073 Persistent Event Log Pages: Not Supported 00:20:47.073 Supported Log Pages Log Page: May Support 00:20:47.073 Commands Supported & Effects Log Page: Not Supported 00:20:47.073 Feature Identifiers & Effects Log Page:May Support 00:20:47.073 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.073 Data Area 4 for Telemetry Log: Not Supported 00:20:47.073 Error Log Page Entries Supported: 128 00:20:47.073 Keep Alive: Supported 00:20:47.073 Keep Alive Granularity: 1000 ms 00:20:47.073 00:20:47.073 NVM Command Set Attributes 00:20:47.073 ========================== 00:20:47.073 Submission Queue Entry Size 00:20:47.073 Max: 64 00:20:47.073 Min: 64 00:20:47.073 Completion Queue Entry Size 00:20:47.073 Max: 16 00:20:47.073 Min: 16 00:20:47.073 Number of Namespaces: 1024 00:20:47.073 Compare Command: Not Supported 00:20:47.073 Write Uncorrectable Command: Not Supported 00:20:47.073 Dataset Management Command: Supported 00:20:47.073 Write Zeroes Command: Supported 00:20:47.073 Set Features Save Field: Not Supported 00:20:47.073 Reservations: Not Supported 00:20:47.073 Timestamp: Not Supported 00:20:47.074 Copy: Not Supported 00:20:47.074 Volatile Write Cache: Present 00:20:47.074 Atomic Write Unit (Normal): 1 00:20:47.074 Atomic Write Unit (PFail): 1 00:20:47.074 Atomic Compare & Write Unit: 1 00:20:47.074 Fused Compare & Write: Not Supported 00:20:47.074 Scatter-Gather List 00:20:47.074 SGL Command Set: Supported 00:20:47.074 SGL Keyed: Not Supported 00:20:47.074 SGL Bit Bucket Descriptor: Not Supported 00:20:47.074 SGL Metadata Pointer: Not Supported 00:20:47.074 Oversized SGL: Not Supported 00:20:47.074 SGL Metadata Address: Not Supported 00:20:47.074 SGL Offset: Supported 00:20:47.074 Transport SGL Data Block: Not Supported 00:20:47.074 Replay Protected Memory Block: Not Supported 00:20:47.074 00:20:47.074 Firmware Slot Information 00:20:47.074 ========================= 00:20:47.074 Active slot: 0 00:20:47.074 00:20:47.074 Asymmetric Namespace Access 00:20:47.074 =========================== 00:20:47.074 Change Count : 0 00:20:47.074 Number of ANA Group Descriptors : 1 00:20:47.074 ANA Group Descriptor : 0 00:20:47.074 ANA Group ID : 1 00:20:47.074 Number of NSID Values : 1 00:20:47.074 Change Count : 0 00:20:47.074 ANA State : 1 00:20:47.074 Namespace Identifier : 1 00:20:47.074 00:20:47.074 Commands Supported and Effects 00:20:47.074 ============================== 00:20:47.074 Admin Commands 00:20:47.074 -------------- 00:20:47.074 Get Log Page (02h): Supported 00:20:47.074 Identify (06h): Supported 00:20:47.074 Abort (08h): Supported 00:20:47.074 Set Features (09h): Supported 00:20:47.074 Get Features (0Ah): Supported 00:20:47.074 Asynchronous Event Request (0Ch): Supported 00:20:47.074 Keep Alive (18h): Supported 00:20:47.074 I/O Commands 00:20:47.074 ------------ 00:20:47.074 Flush (00h): Supported 00:20:47.074 Write (01h): Supported LBA-Change 00:20:47.074 Read (02h): Supported 00:20:47.074 Write Zeroes (08h): Supported LBA-Change 00:20:47.074 Dataset Management (09h): Supported 00:20:47.074 00:20:47.074 Error Log 00:20:47.074 ========= 00:20:47.074 Entry: 0 00:20:47.074 Error Count: 0x3 00:20:47.074 Submission Queue Id: 0x0 00:20:47.074 Command Id: 0x5 00:20:47.074 Phase Bit: 0 00:20:47.074 Status Code: 0x2 00:20:47.074 Status Code Type: 0x0 00:20:47.074 Do Not Retry: 1 00:20:47.074 Error 
Location: 0x28 00:20:47.074 LBA: 0x0 00:20:47.074 Namespace: 0x0 00:20:47.074 Vendor Log Page: 0x0 00:20:47.074 ----------- 00:20:47.074 Entry: 1 00:20:47.074 Error Count: 0x2 00:20:47.074 Submission Queue Id: 0x0 00:20:47.074 Command Id: 0x5 00:20:47.074 Phase Bit: 0 00:20:47.074 Status Code: 0x2 00:20:47.074 Status Code Type: 0x0 00:20:47.074 Do Not Retry: 1 00:20:47.074 Error Location: 0x28 00:20:47.074 LBA: 0x0 00:20:47.074 Namespace: 0x0 00:20:47.074 Vendor Log Page: 0x0 00:20:47.074 ----------- 00:20:47.074 Entry: 2 00:20:47.074 Error Count: 0x1 00:20:47.074 Submission Queue Id: 0x0 00:20:47.074 Command Id: 0x4 00:20:47.074 Phase Bit: 0 00:20:47.074 Status Code: 0x2 00:20:47.074 Status Code Type: 0x0 00:20:47.074 Do Not Retry: 1 00:20:47.074 Error Location: 0x28 00:20:47.074 LBA: 0x0 00:20:47.074 Namespace: 0x0 00:20:47.074 Vendor Log Page: 0x0 00:20:47.074 00:20:47.074 Number of Queues 00:20:47.074 ================ 00:20:47.074 Number of I/O Submission Queues: 128 00:20:47.074 Number of I/O Completion Queues: 128 00:20:47.074 00:20:47.074 ZNS Specific Controller Data 00:20:47.074 ============================ 00:20:47.074 Zone Append Size Limit: 0 00:20:47.074 00:20:47.074 00:20:47.074 Active Namespaces 00:20:47.074 ================= 00:20:47.074 get_feature(0x05) failed 00:20:47.074 Namespace ID:1 00:20:47.074 Command Set Identifier: NVM (00h) 00:20:47.074 Deallocate: Supported 00:20:47.074 Deallocated/Unwritten Error: Not Supported 00:20:47.074 Deallocated Read Value: Unknown 00:20:47.074 Deallocate in Write Zeroes: Not Supported 00:20:47.074 Deallocated Guard Field: 0xFFFF 00:20:47.074 Flush: Supported 00:20:47.074 Reservation: Not Supported 00:20:47.074 Namespace Sharing Capabilities: Multiple Controllers 00:20:47.074 Size (in LBAs): 1310720 (5GiB) 00:20:47.074 Capacity (in LBAs): 1310720 (5GiB) 00:20:47.074 Utilization (in LBAs): 1310720 (5GiB) 00:20:47.074 UUID: 4dae84c0-d499-444c-99a4-8ac34a6aa5a5 00:20:47.074 Thin Provisioning: Not Supported 00:20:47.074 Per-NS Atomic Units: Yes 00:20:47.074 Atomic Boundary Size (Normal): 0 00:20:47.074 Atomic Boundary Size (PFail): 0 00:20:47.074 Atomic Boundary Offset: 0 00:20:47.074 NGUID/EUI64 Never Reused: No 00:20:47.074 ANA group ID: 1 00:20:47.074 Namespace Write Protected: No 00:20:47.074 Number of LBA Formats: 1 00:20:47.074 Current LBA Format: LBA Format #00 00:20:47.074 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:47.074 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.074 rmmod nvme_tcp 00:20:47.074 rmmod nvme_fabrics 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:20:47.074 05:26:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:47.074 05:26:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:48.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:48.009 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:48.009 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:48.009 ************************************ 00:20:48.009 END TEST nvmf_identify_kernel_target 00:20:48.009 ************************************ 00:20:48.009 00:20:48.009 real 0m2.762s 00:20:48.009 user 0m0.991s 00:20:48.009 sys 0m1.291s 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.009 ************************************ 00:20:48.009 START TEST nvmf_auth_host 00:20:48.009 ************************************ 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:48.009 * Looking for test storage... 00:20:48.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.009 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:48.267 05:26:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:48.267 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:48.268 Cannot find device "nvmf_tgt_br" 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:48.268 Cannot find device "nvmf_tgt_br2" 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:48.268 Cannot find device "nvmf_tgt_br" 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:48.268 Cannot find device "nvmf_tgt_br2" 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:48.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:48.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:48.268 05:26:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:48.268 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:48.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:20:48.527 00:20:48.527 --- 10.0.0.2 ping statistics --- 00:20:48.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.527 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:48.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:48.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:48.527 00:20:48.527 --- 10.0.0.3 ping statistics --- 00:20:48.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.527 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:48.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:48.527 00:20:48.527 --- 10.0.0.1 ping statistics --- 00:20:48.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.527 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=90691 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 90691 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90691 ']' 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.527 05:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2eee3572274f58079a7c1101bfab88a0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zVm 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2eee3572274f58079a7c1101bfab88a0 0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2eee3572274f58079a7c1101bfab88a0 0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2eee3572274f58079a7c1101bfab88a0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zVm 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zVm 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zVm 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:49.912 05:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=48cbdd706f9650f0ff17d753aeaddb2103ef1f1f2c6154f9e47dc40239e60da0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mH9 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 48cbdd706f9650f0ff17d753aeaddb2103ef1f1f2c6154f9e47dc40239e60da0 3 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 48cbdd706f9650f0ff17d753aeaddb2103ef1f1f2c6154f9e47dc40239e60da0 3 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=48cbdd706f9650f0ff17d753aeaddb2103ef1f1f2c6154f9e47dc40239e60da0 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mH9 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mH9 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.mH9 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fa7bbb887fb1216d299543237777cd14175037e45d8c6224 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EsV 00:20:49.912 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fa7bbb887fb1216d299543237777cd14175037e45d8c6224 0 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fa7bbb887fb1216d299543237777cd14175037e45d8c6224 0 
00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fa7bbb887fb1216d299543237777cd14175037e45d8c6224 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EsV 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EsV 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EsV 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c176dc462e56e05e4ad4c80f1934c39bf4ac3d934ae87386 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wCY 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c176dc462e56e05e4ad4c80f1934c39bf4ac3d934ae87386 2 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c176dc462e56e05e4ad4c80f1934c39bf4ac3d934ae87386 2 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c176dc462e56e05e4ad4c80f1934c39bf4ac3d934ae87386 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wCY 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wCY 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wCY 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.913 05:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=69f0ddc19af6144aa1a2f70f088b5d33 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0AQ 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 69f0ddc19af6144aa1a2f70f088b5d33 1 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 69f0ddc19af6144aa1a2f70f088b5d33 1 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=69f0ddc19af6144aa1a2f70f088b5d33 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:20:49.913 05:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0AQ 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0AQ 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0AQ 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3fc94eb7a801a365a86a37c6efb4d8d5 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ih6 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3fc94eb7a801a365a86a37c6efb4d8d5 1 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3fc94eb7a801a365a86a37c6efb4d8d5 1 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=3fc94eb7a801a365a86a37c6efb4d8d5 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:20:49.913 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ih6 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ih6 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Ih6 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=97362106acfd27fe13021e02ccaa228a98be0381cbaf1f32 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QJR 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 97362106acfd27fe13021e02ccaa228a98be0381cbaf1f32 2 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 97362106acfd27fe13021e02ccaa228a98be0381cbaf1f32 2 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=97362106acfd27fe13021e02ccaa228a98be0381cbaf1f32 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QJR 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QJR 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.QJR 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:50.183 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:50.184 05:26:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d3367ddfd09cb79019a7149f47b72615 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rRZ 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d3367ddfd09cb79019a7149f47b72615 0 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d3367ddfd09cb79019a7149f47b72615 0 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d3367ddfd09cb79019a7149f47b72615 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rRZ 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rRZ 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rRZ 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9982e4b8b1dff01536b00ae93ea487a219a28dbf49aed047197ea0b98b72261d 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.R6E 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9982e4b8b1dff01536b00ae93ea487a219a28dbf49aed047197ea0b98b72261d 3 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9982e4b8b1dff01536b00ae93ea487a219a28dbf49aed047197ea0b98b72261d 3 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9982e4b8b1dff01536b00ae93ea487a219a28dbf49aed047197ea0b98b72261d 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.R6E 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.R6E 00:20:50.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.R6E 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 90691 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90691 ']' 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:50.184 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.442 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zVm 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.mH9 ]] 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mH9 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EsV 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wCY ]] 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
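[Note] The gen_dhchap_key calls traced above all follow one recipe: xxd pulls len/2 random bytes from /dev/urandom as a hex string, format_dhchap_key wraps it into the DHHC-1 secret representation (the digit after "DHHC-1:0" is the digests map index, 0 through 3 for null/sha256/sha384/sha512), and the result lands in a mktemp file locked down to 0600. A minimal standalone sketch of one such key follows; the body of the traced "python -" one-liner is not shown in the log, so the base64(secret || CRC-32) encoding below is an assumption inferred from the DHHC-1 key strings printed later in this trace:

    # equivalent of 'gen_dhchap_key sha256 32': 16 random bytes -> 32 hex chars
    key=$(xxd -p -c0 -l 16 /dev/urandom)
    file=$(mktemp -t spdk.key-sha256.XXX)
    python3 - "$key" 1 <<'PY' > "$file"    # 1 == sha256 in the digests map above
    import base64, sys, zlib
    secret = sys.argv[1].encode()          # the ASCII hex string, not raw bytes
    # assumed detail: a little-endian CRC-32 of the secret is appended before base64
    blob = secret + zlib.crc32(secret).to_bytes(4, "little")
    print(f"DHHC-1:0{sys.argv[2]}:{base64.b64encode(blob).decode()}:")
    PY
    chmod 0600 "$file"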
/tmp/spdk.key-sha384.wCY 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0AQ 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.443 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Ih6 ]] 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ih6 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.QJR 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rRZ ]] 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rRZ 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.R6E 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.701 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:50.702 05:26:54 
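[Note] With all five key/ckey pairs generated, the trace registers each file with the freshly started target (pid 90691) under a short keyring name, so later auth RPCs can refer to key2/ckey2 instead of tmp paths. rpc_cmd is the test wrapper around scripts/rpc.py; the direct equivalent of one traced pair, with names and paths taken from the trace above, would be roughly:

    scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key-sha256.0AQ
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ih6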
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:50.702 05:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:50.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:50.959 Waiting for block devices as requested 00:20:50.959 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:51.217 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:51.783 No valid GPT data, bailing 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:51.783 No valid GPT data, bailing 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:51.783 No valid GPT data, bailing 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:51.783 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:52.041 No valid GPT data, bailing 00:20:52.041 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:52.041 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:52.041 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:52.041 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:52.041 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:20:52.041 05:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
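[Note] The scan above is how the helper picks a namespace to export through the kernel target: setup.sh reset first hands the devices back to the kernel nvme driver, then each /sys/block/nvme* entry is kept only if it is not zoned and carries no partition table ("No valid GPT data, bailing" plus an empty blkid PTTYPE means block_in_use returns 1, i.e. the device is free). Condensed, under the assumption that those two checks are the only ones that matter on this rig:

    for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      # zoned namespaces are skipped (a missing queue/zoned file counts as not zoned)
      if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then continue; fi
      # a device with any partition table is considered in use
      [[ -z $(blkid -s PTTYPE -o value "$dev") ]] || continue
      nvme=$dev   # the last free device wins, /dev/nvme1n1 in this run
    done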
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -a 10.0.0.1 -t tcp -s 4420 00:20:52.041 00:20:52.041 Discovery Log Number of Records 2, Generation counter 2 00:20:52.041 =====Discovery Log Entry 0====== 00:20:52.041 trtype: tcp 00:20:52.041 adrfam: ipv4 00:20:52.041 subtype: current discovery subsystem 00:20:52.041 treq: not specified, sq flow control disable supported 00:20:52.041 portid: 1 00:20:52.041 trsvcid: 4420 00:20:52.041 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:52.041 traddr: 10.0.0.1 00:20:52.041 eflags: none 00:20:52.041 sectype: none 00:20:52.041 =====Discovery Log Entry 1====== 00:20:52.041 trtype: tcp 00:20:52.041 adrfam: ipv4 00:20:52.041 subtype: nvme subsystem 00:20:52.041 treq: not specified, sq flow control disable supported 00:20:52.041 portid: 1 00:20:52.041 trsvcid: 4420 00:20:52.041 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:52.041 traddr: 10.0.0.1 00:20:52.041 eflags: none 00:20:52.041 sectype: none 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:52.041 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
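[Note] Unrolled, the mkdir/echo/ln sequence above is the stock kernel nvmet configfs recipe for exporting the chosen namespace over TCP on 10.0.0.1:4420. The trace shows only the values being echoed, so which attribute file each write lands in is inferred from the standard nvmet layout:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"   # inferred destination
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

The nvme discover output above then confirms the export: one record for the discovery subsystem itself and one for nqn.2024-02.io.spdk:cnode0.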
ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
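[Note] nvmet_auth_init and nvmet_auth_set_key then arm the target side of DH-HMAC-CHAP: allow_any_host is switched back off (the bare "echo 0" traced at host/auth.sh@37), the host NQN is whitelisted, and the per-host hash, DH group, host key and controller key are written. Again the trace shows only the echoed values; the configfs attribute names below are the standard nvmet ones, and the DHHC-1 strings are shortened here for readability:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    mkdir "$host"
    echo 0 > "$sub/attr_allow_any_host"           # $sub as in the previous sketch
    ln -s "$host" "$sub/allowed_hosts/"
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:ZmE3...Scg==:' > "$host/dhchap_key"       # host (key1) secret
    echo 'DHHC-1:02:YzE3...Big==:' > "$host/dhchap_ctrl_key"  # controller (ckey1) secret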
10.0.0.1 ]] 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.042 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.300 nvme0n1 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
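[Note] On the initiator side, connect_authenticate first pins the digests and DH groups that bdev_nvme may negotiate, then attaches with the named keys; the nvme0n1 appearing in the trace is the bdev created after a successful handshake, and the controller is detached again before the next combination. In direct scripts/rpc.py terms the first traced leg is roughly:

    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1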
host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.300 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.301 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.301 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.301 nvme0n1 00:20:52.301 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.301 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.559 
05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.559 05:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.559 nvme0n1 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:52.559 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:52.559 05:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.560 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.818 nvme0n1 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.818 05:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.818 05:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.077 nvme0n1 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:53.077 
05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:53.077 nvme0n1 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.077 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.078 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:53.344 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.345 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.603 05:26:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.603 nvme0n1 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.603 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.861 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.862 05:26:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.862 05:26:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 nvme0n1 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 05:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.121 nvme0n1 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.121 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 nvme0n1 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 nvme0n1 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.380 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.381 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.381 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.381 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.381 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.638 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.639 05:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.206 05:26:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.206 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.465 nvme0n1 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.465 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.466 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:55.466 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.466 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:55.466 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:55.466 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:55.466 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.466 05:26:59 
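The nvmf/common.sh@741-@755 run that precedes every attach above resolves which address the initiator should dial. A hedged reconstruction from those trace lines; the variable names come straight from the log, while the early-return behaviour is an assumption:

  # Picks the address for the transport under test via indirect expansion.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP          # @744
      ip_candidates["tcp"]=NVMF_INITIATOR_IP              # @745
      # @747: both the transport and its candidate variable name must be set
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                # @748: NVMF_INITIATOR_IP for tcp
      ip=${!ip}                                           # indirect lookup -> 10.0.0.1 here
      [[ -z $ip ]] && return 1                            # @750
      echo "$ip"                                          # @755
  }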
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.466 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.724 nvme0n1 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.724 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.983 nvme0n1 00:20:55.983 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.983 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.983 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.983 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.983 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.984 05:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.984 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.247 nvme0n1 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.247 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.248 05:27:00 
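On the target side, the echo 'hmac(sha256)' / echo ffdhe4096 / echo DHHC-1:... triplets at host/auth.sh@48-@51 program the in-kernel nvmet host entry. xtrace never prints redirections, so the configfs paths below are an assumption based on the kernel nvmet host attributes rather than something the log confirms; the rest mirrors the trace, including the keyid=4 case just above, where the empty ckey ([[ -z '' ]]) skips the controller-key write.

  # Assumed layout: the echoes land in nvmet's configfs host entry (needs root).
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumption
      echo "hmac($digest)" > "$host/dhchap_hash"     # @48: e.g. hmac(sha256)
      echo "$dhgroup"      > "$host/dhchap_dhgroup"  # @49: e.g. ffdhe4096
      echo "$key"          > "$host/dhchap_key"      # @50: the DHHC-1:... secret
      # @51: no controller key for keyid 4, so bidirectional auth is skipped
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }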
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.248 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.506 nvme0n1 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.506 05:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.406 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.664 nvme0n1 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:20:58.664 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.665 05:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.231 nvme0n1 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.231 05:27:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.231 05:27:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.231 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 nvme0n1 00:20:59.794 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.794 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.794 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.794 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:59.795 05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.795 
05:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.053 nvme0n1 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:00.053 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.054 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.620 nvme0n1 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.620 05:27:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.620 05:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.186 nvme0n1 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.186 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:01.444 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.445 05:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.011 nvme0n1 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.011 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.012 
05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.012 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.577 nvme0n1 00:21:02.577 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.577 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.577 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.577 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.577 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.577 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.837 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.838 05:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.405 nvme0n1 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.405 05:27:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:03.405 05:27:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.405 05:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.340 nvme0n1 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.340 nvme0n1 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:04.340 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.341 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.599 nvme0n1 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:04.599 
05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:04.599 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.600 nvme0n1 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.600 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.860 
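[editor's note] Worth pausing on the ckey=(...) line that recurs at host/auth.sh@58: the ${ckeys[keyid]:+...} expansion builds a one-shot argument array that is non-empty only when a controller key is defined for the slot, so the attach call can splice it in unconditionally. Keyid 4 in this run has an empty ckey, and its attach goes out without --dhchap-ctrlr-key, which is exactly this mechanism at work. Condensed from @58 and @61:

    # The array expands to nothing when ckeys[keyid] is unset/empty, so
    # "${ckey[@]}" simply vanishes from argv on slots without a controller key.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"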
05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.860 nvme0n1 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.860 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.861 05:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 nvme0n1 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 nvme0n1 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.379 
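[editor's note] The get_main_ns_ip helper traced repeatedly at nvmf/common.sh@741-@755 reduces to: map the transport under test to the name of the IP variable, dereference it, and fail if either lookup comes up empty. For this tcp run it resolves NVMF_INITIATOR_IP to 10.0.0.1. A reconstruction from the trace; the candidate map and checks are as logged, while the name of the transport variable is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP                # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP                    # @745

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip=NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1 here
        [[ -z $ip ]] && return 1               # @750
        echo "$ip"                             # @755
    }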
05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:05.379 05:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.379 nvme0n1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:05.379 05:27:09 
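[editor's note] A note on the secrets themselves: they follow the NVMe-oF DH-HMAC-CHAP representation DHHC-1:NN:&lt;base64&gt;:, and judging by the base64 lengths in this run the NN field tracks the secret size (00 and 01 carry 32-byte secrets, 02 a 48-byte one, 03 a 64-byte one, each with a short trailing checksum). One common way to mint a key in this shape is nvme-cli's generator; the command and flags below are an nvme-cli feature assumed for illustration, not something this job is shown running:

    # Hypothetical example (nvme-cli, not from this log): generate a DHHC-1:02:
    # secret, i.e. a 48-byte key suited to an hmac(sha384) pairing.
    nvme gen-dhchap-key --hmac=2 --nqn=nqn.2024-02.io.spdk:host0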
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.379 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.638 nvme0n1 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.638 05:27:09 
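[editor's note] One recurring bit of noise worth decoding: nearly every RPC above is bracketed by xtrace_disable (autotest_common.sh@561) and a [[ 0 == 0 ]] check (@589). That is the harness muting set -x while the RPC round-trips, then asserting the captured exit status, which is 0 on success, hence the tautology in the trace. A rough sketch of that wrapper; only xtrace_disable and the status check are visible here, the rest is assumed plumbing:

    rpc_cmd() {
        xtrace_disable                            # @561, visible above
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?   # hypothetical dispatch path
        xtrace_restore                            # assumed counterpart helper
        [[ $rc == 0 ]]                            # traced as '[[ 0 == 0 ]]' at @589
    }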
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.638 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.897 nvme0n1 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.897 
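[editor's note] By this point the shape of the sweep is clear: an outer walk over DH groups (host/auth.sh@101) and an inner walk over every key slot (@102), with ffdhe3072 finishing its keyid 0-4 pass here before ffdhe4096 begins below. Condensed from the trace (the digest is fixed at sha384 throughout this window and presumably comes from an enclosing loop not shown here):

    for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do        # 0..4 in this run
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
        done
    done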
05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.897 05:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
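[editor's note] Each round closes the same way once the namespace (the stray nvme0n1 lines) shows up: host/auth.sh@64 lists controllers and asserts the expected name, and @65 tears the controller down so the next (dhgroup, keyid) pair starts from a clean host state. Condensed:

    # @64-@65: assert the authenticated controller came up, then detach.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0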
00:21:05.897 nvme0n1 00:21:05.897 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.897 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.897 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.897 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.897 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.156 05:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.156 nvme0n1 00:21:06.156 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.462 05:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.462 05:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.462 nvme0n1 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.462 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.721 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.722 nvme0n1 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.722 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.980 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.980 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:06.980 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.980 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.980 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.981 05:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.981 nvme0n1 00:21:06.981 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.981 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.981 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.981 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.981 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.981 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.239 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.240 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.498 nvme0n1 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.498 05:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.498 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.755 nvme0n1 00:21:07.755 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.755 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.755 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.755 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.755 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.755 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.756 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.014 05:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.014 05:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.288 nvme0n1 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.288 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:08.289 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.289 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:08.289 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:08.289 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:08.289 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.289 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.289 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.855 nvme0n1 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:08.855 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.856 05:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.114 nvme0n1 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.114 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:09.115 05:27:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.115 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.684 nvme0n1 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.684 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.685 05:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.259 nvme0n1 00:21:10.259 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.259 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.259 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.259 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.259 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.259 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.259 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.260 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.826 nvme0n1 00:21:10.826 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.826 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.826 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.826 05:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.826 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.826 05:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.085 05:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.085 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 nvme0n1 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.652 05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.652 
05:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.218 nvme0n1 00:21:12.218 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.218 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.218 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.218 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.218 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.218 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.477 05:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 nvme0n1 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:13.044 05:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:13.044 05:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.044 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.303 nvme0n1 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:13.303 05:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:13.303 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.304 nvme0n1 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.304 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.563 nvme0n1 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.563 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.564 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.823 nvme0n1 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.823 nvme0n1 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.823 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.082 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.082 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.082 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.082 05:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.082 nvme0n1 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:14.082 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.083 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.341 nvme0n1 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:14.341 
05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:14.341 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.342 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.600 nvme0n1 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.600 
05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.600 nvme0n1 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.600 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.858 nvme0n1 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.858 05:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.858 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.859 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.115 nvme0n1 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.115 
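The trace above is one full pass of the test's per-key cycle: host/auth.sh first programs the secret on the kernel nvmet target side (the echo 'hmac(sha512)', echo ffdhe…, and echo DHHC-1:… steps), then restricts the SPDK host to a single digest and DH group with bdev_nvme_set_options, attaches with the matching key, verifies the controller, and detaches. The following is a minimal bash sketch of that cycle, not the test's literal source; it assumes rpc.py is on PATH in place of the test's rpc_cmd wrapper, that the nvmet host directory for nqn.2024-02.io.spdk:host0 already exists, and that key0/ckey0 name keys already registered with SPDK earlier in the script. The configfs attribute names are the usual kernel nvmet ones, reconstructed rather than quoted from this log.

    # Placeholder key tables: one (key, controller-key) pair copied from the trace above.
    keys=('DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc:')
    ckeys=('DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=:')
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed to exist

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        # Target side: program digest, DH group, and secret(s) for this host NQN.
        echo 'hmac(sha512)'   > "$host/dhchap_hash"
        echo "$dhgroup"       > "$host/dhchap_dhgroup"
        echo "${keys[keyid]}" > "$host/dhchap_key"
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
        # Host side: allow exactly one digest/dhgroup, then connect with the
        # matching key names. The ${var:+...} expansion (taken verbatim from
        # host/auth.sh@58 above) drops the controller-key flag entirely when no
        # bidirectional secret exists for this keyid.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        rpc.py bdev_nvme_detach_controller nvme0
      done
    done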
05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:15.115 05:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.115 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.373 nvme0n1 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:15.373 05:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.373 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.374 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.632 nvme0n1 00:21:15.632 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.633 05:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.633 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.891 nvme0n1 00:21:15.891 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.891 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.891 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.891 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.891 05:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.891 
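All of the secrets exchanged in this trace use the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<tt>:<base64>:. The two-digit field selects how the secret is used (00 = as-is; 01, 02, 03 = transformed with SHA-256/384/512 first), and the base64 payload is the raw secret with a 4-byte CRC-32 appended; that is why the 00/01 strings above decode to 36 bytes (32-byte secret plus CRC), the 02 strings to 52, and the 03 strings to 68. A quick sanity check on keyid 0's secret from this log, assuming coreutils base64:

    key='DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc:'
    b64=${key#DHHC-1:??:}                     # drop the "DHHC-1:<tt>:" prefix
    b64=${b64%:}                              # ...and the trailing colon
    printf '%s' "$b64" | base64 -d | wc -c    # prints 36 = 32-byte secret + 4-byte CRC-32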
05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.891 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:16.149 nvme0n1 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.149 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:16.407 05:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.407 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.666 nvme0n1 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.666 05:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.666 05:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.666 05:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.232 nvme0n1 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.232 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.233 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.490 nvme0n1 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.490 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.749 05:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.007 nvme0n1 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.007 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.574 nvme0n1 00:21:18.574 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.574 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.574 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.574 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.574 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.574 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVlZTM1NzIyNzRmNTgwNzlhN2MxMTAxYmZhYjg4YTAv9ycc: 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: ]] 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjYmRkNzA2Zjk2NTBmMGZmMTdkNzUzYWVhZGRiMjEwM2VmMWYxZjJjNjE1NGY5ZTQ3ZGM0MDIzOWU2MGRhMLYJFv8=: 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.575 05:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.575 05:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.141 nvme0n1 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:19.141 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.142 05:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.142 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.074 nvme0n1 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:20.074 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlmMGRkYzE5YWY2MTQ0YWExYTJmNzBmMDg4YjVkMzOwxDhJ: 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: ]] 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZjOTRlYjdhODAxYTM2NWE4NmEzN2M2ZWZiNGQ4ZDXjX8FW: 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.075 05:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.640 nvme0n1 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:20.640 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTczNjIxMDZhY2ZkMjdmZTEzMDIxZTAyY2NhYTIyOGE5OGJlMDM4MWNiYWYxZjMy/zpBxw==: 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: ]] 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzNjdkZGZkMDljYjc5MDE5YTcxNDlmNDdiNzI2MTWuq/Bh: 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.641 05:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.206 nvme0n1 00:21:21.206 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.206 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.206 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.206 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.206 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.206 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk4MmU0YjhiMWRmZjAxNTM2YjAwYWU5M2VhNDg3YTIxOWEyOGRiZjQ5YWVkMDQ3MTk3ZWEwYjk4YjcyMjYxZHqAj0Y=: 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:21.207 05:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.207 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 nvme0n1 00:21:22.142 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.142 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.142 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.142 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.142 05:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE3YmJiODg3ZmIxMjE2ZDI5OTU0MzIzNzc3N2NkMTQxNzUwMzdlNDVkOGM2MjI0Kl4Scg==: 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: ]] 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3NmRjNDYyZTU2ZTA1ZTRhZDRjODBmMTkzNGMzOWJmNGFjM2Q5MzRhZTg3Mzg29q6Big==: 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:22.142 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.143 2024/07/25 05:27:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:22.143 request: 00:21:22.143 { 00:21:22.143 "method": "bdev_nvme_attach_controller", 00:21:22.143 "params": { 00:21:22.143 "name": "nvme0", 00:21:22.143 "trtype": "tcp", 00:21:22.143 "traddr": "10.0.0.1", 00:21:22.143 "adrfam": "ipv4", 00:21:22.143 "trsvcid": "4420", 00:21:22.143 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:22.143 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:22.143 "prchk_reftag": false, 00:21:22.143 "prchk_guard": false, 00:21:22.143 "hdgst": false, 00:21:22.143 "ddgst": false 00:21:22.143 } 00:21:22.143 } 00:21:22.143 Got JSON-RPC error response 00:21:22.143 GoRPCClient: error on JSON-RPC call 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- 
# local ip 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.143 2024/07/25 05:27:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:22.143 request: 00:21:22.143 { 00:21:22.143 "method": "bdev_nvme_attach_controller", 00:21:22.143 "params": { 00:21:22.143 "name": "nvme0", 00:21:22.143 "trtype": "tcp", 00:21:22.143 "traddr": "10.0.0.1", 00:21:22.143 "adrfam": "ipv4", 00:21:22.143 "trsvcid": "4420", 00:21:22.143 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:22.143 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:22.143 "prchk_reftag": false, 00:21:22.143 "prchk_guard": false, 00:21:22.143 "hdgst": false, 00:21:22.143 "ddgst": false, 00:21:22.143 "dhchap_key": "key2" 00:21:22.143 } 00:21:22.143 } 00:21:22.143 Got 
JSON-RPC error response 00:21:22.143 GoRPCClient: error on JSON-RPC call 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.143 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.144 2024/07/25 05:27:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:22.144 request: 00:21:22.144 { 00:21:22.144 "method": "bdev_nvme_attach_controller", 00:21:22.144 "params": { 00:21:22.144 "name": "nvme0", 00:21:22.144 "trtype": "tcp", 00:21:22.144 "traddr": "10.0.0.1", 00:21:22.144 "adrfam": "ipv4", 00:21:22.144 "trsvcid": "4420", 00:21:22.144 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:22.144 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:22.144 "prchk_reftag": false, 00:21:22.144 "prchk_guard": false, 00:21:22.144 "hdgst": false, 00:21:22.144 "ddgst": false, 00:21:22.144 "dhchap_key": "key1", 00:21:22.144 "dhchap_ctrlr_key": "ckey2" 00:21:22.144 } 00:21:22.144 } 00:21:22.144 Got JSON-RPC error response 00:21:22.144 GoRPCClient: error on JSON-RPC call 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.144 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.144 rmmod nvme_tcp 00:21:22.402 rmmod nvme_fabrics 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 90691 ']' 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # 
killprocess 90691 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 90691 ']' 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 90691 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90691 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:22.402 killing process with pid 90691 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90691' 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 90691 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 90691 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:22.402 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:21:22.660 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:22.660 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:22.660 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:22.660 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:22.660 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:22.660 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:22.660 05:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:23.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.224 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:23.481 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:23.481 05:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zVm /tmp/spdk.key-null.EsV /tmp/spdk.key-sha256.0AQ /tmp/spdk.key-sha384.QJR /tmp/spdk.key-sha512.R6E /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:23.481 05:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:23.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.739 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:23.739 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:23.739 00:21:23.739 real 0m35.812s 00:21:23.739 user 0m31.988s 00:21:23.739 sys 0m3.574s 00:21:23.739 05:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:23.739 ************************************ 00:21:23.739 END TEST nvmf_auth_host 00:21:23.739 ************************************ 00:21:23.739 05:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.998 05:27:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:23.998 05:27:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:23.998 05:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:23.998 05:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:23.998 05:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.998 ************************************ 00:21:23.998 START TEST nvmf_digest 00:21:23.998 ************************************ 00:21:23.998 05:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:23.998 * Looking for test storage... 
00:21:23.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:23.998 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:23.998 
05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:23.999 Cannot find device "nvmf_tgt_br" 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.999 Cannot find device "nvmf_tgt_br2" 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:23.999 
Cannot find device "nvmf_tgt_br" 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:23.999 Cannot find device "nvmf_tgt_br2" 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:23.999 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:24.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:21:24.257 00:21:24.257 --- 10.0.0.2 ping statistics --- 00:21:24.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.257 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:24.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:24.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:24.257 00:21:24.257 --- 10.0.0.3 ping statistics --- 00:21:24.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.257 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:24.257 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:24.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:24.257 00:21:24.258 --- 10.0.0.1 ping statistics --- 00:21:24.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.258 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:24.258 ************************************ 00:21:24.258 START TEST nvmf_digest_clean 00:21:24.258 
************************************ 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:24.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92291 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92291 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92291 ']' 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.258 05:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:24.516 [2024-07-25 05:27:28.480039] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:21:24.516 [2024-07-25 05:27:28.480139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.516 [2024-07-25 05:27:28.620758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.774 [2024-07-25 05:27:28.699794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.774 [2024-07-25 05:27:28.699858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:24.775 [2024-07-25 05:27:28.699874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.775 [2024-07-25 05:27:28.699884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.775 [2024-07-25 05:27:28.699893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.775 [2024-07-25 05:27:28.699941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.341 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:25.341 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:25.341 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.341 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.341 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:25.599 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:25.600 null0 00:21:25.600 [2024-07-25 05:27:29.606082] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.600 [2024-07-25 05:27:29.630199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:25.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
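The nvmfappstart and common_target_config steps above configured the target over /var/tmp/spdk.sock. The batched rpc_cmd body at host/digest.sh@43 is not echoed in the trace, only its effects are (the TCP transport init notice, the null0 bdev, and the listener on 10.0.0.2:4420). An equivalent explicit sequence would look roughly like the sketch below; the RPC names are standard SPDK, but the exact batch contents and the null bdev geometry are assumptions:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # once waitforlisten sees /var/tmp/spdk.sock:
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp                  # "*** TCP Transport Init ***"
  ./scripts/rpc.py bdev_null_create null0 100 4096               # size/block size hypothetical
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420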
00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92341 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92341 /var/tmp/bperf.sock 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92341 ']' 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.600 05:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:25.600 [2024-07-25 05:27:29.686849] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:21:25.600 [2024-07-25 05:27:29.686938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92341 ] 00:21:25.859 [2024-07-25 05:27:29.821068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.859 [2024-07-25 05:27:29.891454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.803 05:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.803 05:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:26.803 05:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:26.803 05:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:26.803 05:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:26.803 05:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:26.803 05:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.075 nvme0n1 00:21:27.333 05:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:27.333 05:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:27.333 Running I/O for 2 seconds... 
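On the initiator side the test drives everything over a second RPC socket: bdevperf is started idle (-z, --wait-for-rpc) and then configured and run via /var/tmp/bperf.sock. The --ddgst flag on bdev_nvme_attach_controller enables the NVMe/TCP data digest (CRC32C) on the attached controller, which is exactly the accel path this test exercises. Condensed from the commands echoed above:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests   # "Running I/O for 2 seconds..."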
00:21:29.234 00:21:29.234 Latency(us) 00:21:29.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.234 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:29.234 nvme0n1 : 2.00 18337.09 71.63 0.00 0.00 6972.08 3842.79 13047.62 00:21:29.234 =================================================================================================================== 00:21:29.234 Total : 18337.09 71.63 0.00 0.00 6972.08 3842.79 13047.62 00:21:29.234 0 00:21:29.234 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:29.234 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:29.234 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:29.234 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:29.234 | select(.opcode=="crc32c") 00:21:29.234 | "\(.module_name) \(.executed)"' 00:21:29.234 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92341 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92341 ']' 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92341 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92341 00:21:29.800 killing process with pid 92341 00:21:29.800 Received shutdown signal, test time was about 2.000000 seconds 00:21:29.800 00:21:29.800 Latency(us) 00:21:29.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.800 =================================================================================================================== 00:21:29.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92341' 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92341 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92341 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92431 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92431 /var/tmp/bperf.sock 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92431 ']' 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:29.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.800 05:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:29.800 [2024-07-25 05:27:33.929438] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:21:29.800 [2024-07-25 05:27:33.929721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92431 ] 00:21:29.800 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:29.800 Zero copy mechanism will not be used. 
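The zero-copy notice above is informational rather than an error: this second workload reads 128 KiB at a time, which exceeds the 64 KiB zero-copy threshold, so the run falls back to copied receives. The logged check amounts to the following (names illustrative; the real check lives in SPDK's C code, not shell):

  io_size=131072 zcopy_threshold=65536
  if (( io_size > zcopy_threshold )); then
      echo "Zero copy mechanism will not be used."
  fi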
00:21:30.058 [2024-07-25 05:27:34.070636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.058 [2024-07-25 05:27:34.140256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.992 05:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.992 05:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:30.992 05:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:30.992 05:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:30.992 05:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:31.251 05:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.251 05:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.509 nvme0n1 00:21:31.509 05:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:31.509 05:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:31.509 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:31.509 Zero copy mechanism will not be used. 00:21:31.509 Running I/O for 2 seconds... 
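Every run in this file finishes with the same verification, already visible after the first run above: the harness reads the accel framework's crc32c statistics out of the bperf instance and asserts both that digest operations actually executed and that they ran in the expected module, which is software for all of these runs because scan_dsa=false. Condensed from host/digest.sh@36-37 and @93-96 as echoed in this trace:

  read -r acc_module acc_executed < <(
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  exp_module=software                    # dsa disabled in every run here
  (( acc_executed > 0 ))                 # crc32c work really happened
  [[ $acc_module == "$exp_module" ]]     # and in the module under test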
00:21:34.040 00:21:34.040 Latency(us) 00:21:34.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.040 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:34.040 nvme0n1 : 2.00 7401.67 925.21 0.00 0.00 2157.60 666.53 11081.54 00:21:34.040 =================================================================================================================== 00:21:34.040 Total : 7401.67 925.21 0.00 0.00 2157.60 666.53 11081.54 00:21:34.040 0 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:34.040 | select(.opcode=="crc32c") 00:21:34.040 | "\(.module_name) \(.executed)"' 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92431 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92431 ']' 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92431 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92431 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:34.040 killing process with pid 92431 00:21:34.040 05:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92431' 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92431 00:21:34.040 Received shutdown signal, test time was about 2.000000 seconds 00:21:34.040 00:21:34.040 Latency(us) 00:21:34.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.040 =================================================================================================================== 00:21:34.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92431 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92522 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92522 /var/tmp/bperf.sock 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92522 ']' 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.040 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:34.040 [2024-07-25 05:27:38.200341] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
00:21:34.040 [2024-07-25 05:27:38.200423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92522 ] 00:21:34.298 [2024-07-25 05:27:38.334290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.298 [2024-07-25 05:27:38.392676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.298 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.298 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:34.298 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:34.298 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:34.298 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:34.865 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:34.865 05:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.123 nvme0n1 00:21:35.123 05:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:35.123 05:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:35.123 Running I/O for 2 seconds... 
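While this run executes, note the teardown pattern that has followed each run so far: killprocess verifies the bperf pid is alive, resolves its comm name (reactor_1 here), logs, kills, and reaps it. A sketch of the flow as it appears in the autotest_common.sh lines of this trace:

  killprocess() {   # sketch of autotest_common.sh@950-974 as traced here
      local pid=$1
      kill -0 "$pid"                                       # @954: assert it is still alive
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # @955-956: reactor_1 here
      fi
      # @960: a sudo-wrapped process would take a different kill path (not exercised here)
      echo "killing process with pid $pid"                 # @968
      kill "$pid"                                          # @969
      wait "$pid"                                          # @974: reap, propagate exit status
  }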
00:21:37.651 00:21:37.651 Latency(us) 00:21:37.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.651 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:37.651 nvme0n1 : 2.01 21863.12 85.40 0.00 0.00 5848.16 2487.39 11558.17 00:21:37.651 =================================================================================================================== 00:21:37.651 Total : 21863.12 85.40 0.00 0.00 5848.16 2487.39 11558.17 00:21:37.651 0 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:37.651 | select(.opcode=="crc32c") 00:21:37.651 | "\(.module_name) \(.executed)"' 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92522 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92522 ']' 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92522 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92522 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:37.651 killing process with pid 92522 00:21:37.651 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92522' 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92522 00:21:37.652 Received shutdown signal, test time was about 2.000000 seconds 00:21:37.652 00:21:37.652 Latency(us) 00:21:37.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.652 =================================================================================================================== 00:21:37.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92522 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92592 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92592 /var/tmp/bperf.sock 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92592 ']' 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.652 05:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:37.652 [2024-07-25 05:27:41.776798] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:21:37.652 [2024-07-25 05:27:41.776895] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92592 ] 00:21:37.652 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:37.652 Zero copy mechanism will not be used. 
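With this fourth bperf instance starting, every digest-clean workload has now appeared. The full matrix driven by host/digest.sh@128-131, exactly as invoked in this trace:

  run_bperf randread  4096   128 false   # @128: 4 KiB reads,    queue depth 128
  run_bperf randread  131072 16  false   # @129: 128 KiB reads,  queue depth 16
  run_bperf randwrite 4096   128 false   # @130: 4 KiB writes,   queue depth 128
  run_bperf randwrite 131072 16  false   # @131: 128 KiB writes, queue depth 16
  # the trailing "false" is scan_dsa: every run validates the software crc32c module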
00:21:37.910 [2024-07-25 05:27:41.914431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.910 [2024-07-25 05:27:41.973042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.910 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:37.910 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:37.910 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:37.910 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:37.910 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:38.168 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:38.168 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:38.733 nvme0n1 00:21:38.733 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:38.733 05:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:38.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:38.733 Zero copy mechanism will not be used. 00:21:38.733 Running I/O for 2 seconds... 
00:21:41.262 00:21:41.262 Latency(us) 00:21:41.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.263 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:41.263 nvme0n1 : 2.00 7066.06 883.26 0.00 0.00 2258.64 1779.90 11439.01 00:21:41.263 =================================================================================================================== 00:21:41.263 Total : 7066.06 883.26 0.00 0.00 2258.64 1779.90 11439.01 00:21:41.263 0 00:21:41.263 05:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:41.263 05:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:41.263 05:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:41.263 05:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:41.263 05:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:41.263 | select(.opcode=="crc32c") 00:21:41.263 | "\(.module_name) \(.executed)"' 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92592 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92592 ']' 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92592 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92592 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:41.263 killing process with pid 92592 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92592' 00:21:41.263 Received shutdown signal, test time was about 2.000000 seconds 00:21:41.263 00:21:41.263 Latency(us) 00:21:41.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.263 =================================================================================================================== 00:21:41.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92592 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92592 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92291 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92291 ']' 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92291 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92291 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:41.263 killing process with pid 92291 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92291' 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92291 00:21:41.263 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 92291 00:21:41.520 00:21:41.520 real 0m17.077s 00:21:41.520 user 0m32.968s 00:21:41.520 sys 0m4.133s 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:41.520 ************************************ 00:21:41.520 END TEST nvmf_digest_clean 00:21:41.520 ************************************ 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:41.520 ************************************ 00:21:41.520 START TEST nvmf_digest_error 00:21:41.520 ************************************ 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=92694 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 92694 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92694 ']' 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.520 05:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:41.521 [2024-07-25 05:27:45.602785] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:21:41.521 [2024-07-25 05:27:45.602890] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.778 [2024-07-25 05:27:45.741919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.778 [2024-07-25 05:27:45.810430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.778 [2024-07-25 05:27:45.810491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.778 [2024-07-25 05:27:45.810515] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.778 [2024-07-25 05:27:45.810525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.778 [2024-07-25 05:27:45.810534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
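The nvmf_digest_error phase now starting inverts the clean test: instead of verifying that digests pass, it verifies that they fail detectably. The target's crc32c opcode is routed through the error-injection accel module and later told to corrupt results, so the initiator, attached with --ddgst and with retries disabled, must surface data digest errors as transient transport error completions (both visible further down). Condensed from the RPCs echoed in this trace; rpc.py calls without -s go to the target's /var/tmp/spdk.sock:

  ./scripts/rpc.py accel_assign_opc -o crc32c -m error            # digest.sh@104: route crc32c through "error"
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable  # @63: pass-through while attaching
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # @67: now corrupt digests (256 as given)
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests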
00:21:41.778 [2024-07-25 05:27:45.810564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.711 [2024-07-25 05:27:46.671092] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.711 null0 00:21:42.711 [2024-07-25 05:27:46.743892] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.711 [2024-07-25 05:27:46.768028] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92738 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92738 /var/tmp/bperf.sock 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92738 ']' 00:21:42.711 05:27:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.711 05:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.711 [2024-07-25 05:27:46.828868] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:21:42.711 [2024-07-25 05:27:46.828994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92738 ] 00:21:42.968 [2024-07-25 05:27:46.967336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.968 [2024-07-25 05:27:47.061410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.901 05:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.901 05:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:43.901 05:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:43.901 05:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:44.159 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:44.159 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.159 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.159 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.159 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.159 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.417 nvme0n1 00:21:44.417 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:44.417 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.417 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.417 05:27:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.417 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:44.417 05:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:44.675 Running I/O for 2 seconds... 00:21:44.675 [2024-07-25 05:27:48.662288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.662344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.677418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.677463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.677479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.689861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.689907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.689922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.705031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.705079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.705094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.717375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.717418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.717433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.732793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.732839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.732855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
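The traced commands above are the whole setup for this digest_error pass: the nvmf target routes the crc32c opcode to the software "error" accel module and exports a null bdev over NVMe/TCP on 10.0.0.2:4420, while bdevperf (started with -z, driven over /var/tmp/bperf.sock) attaches with data digest enabled and then runs I/O while crc32c corruption is active on the target. The sequence below is a minimal sketch of that flow, assuming the rpc.py/bdevperf paths shown in this run and the usual two-socket split (rpc_cmd reaching the nvmf target on its default socket, bperf_rpc reaching bdevperf on /var/tmp/bperf.sock); it is illustrative, not the verbatim host/digest.sh script.

# Target side: route crc32c through the error-injection accel module before
# the null0 bdev, TCP transport, and 10.0.0.2:4420 listener are configured.
scripts/rpc.py accel_assign_opc -o crc32c -m error

# Host side: bdevperf was launched as
#   build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
BPERF="scripts/rpc.py -s /var/tmp/bperf.sock"
$BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c honest while the controller attaches with data digest (--ddgst) on.
scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c results on the target: the data digests it
# appends to C2H (read) PDUs no longer match the payload, so the host's
# verification fails and every read below is reported as a data digest error.
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests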
00:21:44.675 [2024-07-25 05:27:48.748001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.748044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.748059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.763010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.763063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.763079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.774637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.774680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.774695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.789712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.789756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.789771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.804377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.804422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.804437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.820250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.820293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-25 05:27:48.820309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.675 [2024-07-25 05:27:48.833271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.675 [2024-07-25 05:27:48.833314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-25 05:27:48.833329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.676 [2024-07-25 05:27:48.848033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.676 [2024-07-25 05:27:48.848075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-25 05:27:48.848090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.935 [2024-07-25 05:27:48.863451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.935 [2024-07-25 05:27:48.863494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.935 [2024-07-25 05:27:48.863509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.935 [2024-07-25 05:27:48.877522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.935 [2024-07-25 05:27:48.877566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.935 [2024-07-25 05:27:48.877582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.935 [2024-07-25 05:27:48.891520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.935 [2024-07-25 05:27:48.891567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.891583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:48.906420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:48.906468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.906483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:48.919535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:48.919579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.919595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:48.934439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:48.934482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.934498] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:48.946154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:48.946198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.946213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:48.960155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:48.960200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.960215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:48.974567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:48.974611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.974626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:48.989381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:48.989423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:48.989438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.004022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.004074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.004090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.017417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.017460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.017475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.033205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.033250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.033265] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.045622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.045666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.045681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.060477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.060523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.060538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.075718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.075760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.075775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.088868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.088911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.088926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.936 [2024-07-25 05:27:49.101200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:44.936 [2024-07-25 05:27:49.101244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.936 [2024-07-25 05:27:49.101259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.116711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.116755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.116770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.131125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.131167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:45.218 [2024-07-25 05:27:49.131183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.144585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.144629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.144644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.159509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.159555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.159575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.174424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.174467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.174482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.187114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.187159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.187174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.201573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.201622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.201639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.214825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.214869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.214884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.229904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.229948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:3597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.229979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.244838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.244882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.244897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.259112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.259171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.259188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.271679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.271722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.271737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.283823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.283868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.283883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.298387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.298432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.298447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.313752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.313796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.313811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.327041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.327092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.327108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.340397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.340441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.340455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.356125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.356171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.356187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.368426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.218 [2024-07-25 05:27:49.368470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.218 [2024-07-25 05:27:49.368485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.218 [2024-07-25 05:27:49.383757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.219 [2024-07-25 05:27:49.383801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.219 [2024-07-25 05:27:49.383817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.395836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.395880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.395895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.411184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.411229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.411244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.424096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 
[2024-07-25 05:27:49.424141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.424156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.438989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.439031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.439046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.453704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.453747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.453762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.466579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.466623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.466638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.481222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.481266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.481280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.495330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.495374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.495389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.511051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.511106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.511121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.525165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.525211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.525226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.535077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.535123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.535138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.550908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.550964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.550980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.565316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.565360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.565376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.579013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.579065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.579084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.593368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.593412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.593427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.605728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.605771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.605786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.620132] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.620176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.620192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.633629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.633674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.633690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.477 [2024-07-25 05:27:49.649201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.477 [2024-07-25 05:27:49.649245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.477 [2024-07-25 05:27:49.649260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.736 [2024-07-25 05:27:49.663162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.736 [2024-07-25 05:27:49.663206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.736 [2024-07-25 05:27:49.663221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.736 [2024-07-25 05:27:49.676582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.736 [2024-07-25 05:27:49.676626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.676641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.691016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.691067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.691084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.704978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.705018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.705033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
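Every failed I/O in this run leaves the same three-record pattern: nvme_tcp.c:1459 reports the data digest mismatch on the qpair (tqpair=0xde5e30), nvme_qpair.c:243 prints the failed READ command (sqid/cid/nsid/lba), and nvme_qpair.c:474 prints the completion the host sees, with status (00/22), i.e. status code type 0x0 (generic) and status code 0x22, TRANSIENT TRANSPORT ERROR. dnr:0 means the Do Not Retry bit is clear, so with --bdev-retry-count -1 the bdev_nvme layer keeps resubmitting, which is why the two-second run keeps emitting these records rather than stopping at the first failure. A quick way to tally a saved copy of this output (the bperf.log file name here is illustrative, not from the run):

# count failed reads, and how many distinct LBAs were affected
grep -c 'TRANSIENT TRANSPORT ERROR' bperf.log
grep -o 'lba:[0-9]*' bperf.log | sort -u | wc -l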
00:21:45.737 [2024-07-25 05:27:49.718741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.718784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.718800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.730688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.730732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.730747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.744292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.744335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.744349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.759846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.759890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.759905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.773936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.774001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.774017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.787568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.787620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.787636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.803216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.803268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.803284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.817559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.817604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.817619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.832016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.832058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.832073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.846879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.846931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.846947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.860202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.860245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.860260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.876356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.876406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.876422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.888952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.889006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.889022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.737 [2024-07-25 05:27:49.905857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.737 [2024-07-25 05:27:49.905906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.737 [2024-07-25 05:27:49.905921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:49.922081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:49.922131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:49.922147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:49.937210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:49.937258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:49.937274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:49.947807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:49.947879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:49.947894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:49.964973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:49.965036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:49.965052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:49.980360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:49.980403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:49.980420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:49.994733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:49.994793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:49.994824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.008584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.008629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.008645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.020901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.020962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.020979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.035078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.035134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.035149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.049534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.049596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.049627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.064211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.064268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.064299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.078863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.078907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.078922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.091126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.091170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.091185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.108543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.108592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 
[2024-07-25 05:27:50.108608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.121318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.121374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.121404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.136652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.136699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.136715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.151883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.151933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.151949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.996 [2024-07-25 05:27:50.166533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:45.996 [2024-07-25 05:27:50.166588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.996 [2024-07-25 05:27:50.166604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.179359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.179404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.179420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.195560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.195608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.195623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.210159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.210209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16989 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.210225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.222336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.222382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.222397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.238263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.238315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.238331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.252274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.252320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.252335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.267710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.267773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.267789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.280374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.255 [2024-07-25 05:27:50.280420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-07-25 05:27:50.280435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.255 [2024-07-25 05:27:50.296966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.297017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.297033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.309499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.309558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:113 nsid:1 lba:6246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.309572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.325149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.325198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.325213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.339295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.339346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.339361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.354549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.354597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.354612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.365910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.365967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.365984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.380776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.380826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.380842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.395567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.395617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.410309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.410363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.410379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.256 [2024-07-25 05:27:50.423390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.256 [2024-07-25 05:27:50.423433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.256 [2024-07-25 05:27:50.423448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.514 [2024-07-25 05:27:50.438822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.514 [2024-07-25 05:27:50.438870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.514 [2024-07-25 05:27:50.438886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.514 [2024-07-25 05:27:50.454055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.514 [2024-07-25 05:27:50.454101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.514 [2024-07-25 05:27:50.454116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.514 [2024-07-25 05:27:50.468083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.514 [2024-07-25 05:27:50.468130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.514 [2024-07-25 05:27:50.468144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.514 [2024-07-25 05:27:50.482704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.514 [2024-07-25 05:27:50.482750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.514 [2024-07-25 05:27:50.482766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.514 [2024-07-25 05:27:50.497225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.514 [2024-07-25 05:27:50.497271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.514 [2024-07-25 05:27:50.497287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.509842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 
00:21:46.515 [2024-07-25 05:27:50.509888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.515 [2024-07-25 05:27:50.509903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.524384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.515 [2024-07-25 05:27:50.524429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.515 [2024-07-25 05:27:50.524444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.539422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.515 [2024-07-25 05:27:50.539470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.515 [2024-07-25 05:27:50.539486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.555506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.515 [2024-07-25 05:27:50.555552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.515 [2024-07-25 05:27:50.555568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.569253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.515 [2024-07-25 05:27:50.569302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.515 [2024-07-25 05:27:50.569318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.582586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.515 [2024-07-25 05:27:50.582646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.515 [2024-07-25 05:27:50.582677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.597873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30) 00:21:46.515 [2024-07-25 05:27:50.597928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.515 [2024-07-25 05:27:50.597945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.515 [2024-07-25 05:27:50.613216] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30)
00:21:46.515 [2024-07-25 05:27:50.613262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.515 [2024-07-25 05:27:50.613278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:46.515 [2024-07-25 05:27:50.628387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30)
00:21:46.515 [2024-07-25 05:27:50.628436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.515 [2024-07-25 05:27:50.628452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:46.515 [2024-07-25 05:27:50.640772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5e30)
00:21:46.515 [2024-07-25 05:27:50.640833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.515 [2024-07-25 05:27:50.640865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:46.515
00:21:46.515 Latency(us)
00:21:46.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.515 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:46.515 nvme0n1 : 2.01 17892.40 69.89 0.00 0.00 7145.29 3991.74 19899.11
00:21:46.515 ===================================================================================================================
00:21:46.515 Total : 17892.40 69.89 0.00 0.00 7145.29 3991.74 19899.11
00:21:46.515 0
00:21:46.515 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:46.515 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:46.515 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:46.515 | .driver_specific
00:21:46.515 | .nvme_error
00:21:46.515 | .status_code
00:21:46.515 | .command_transient_transport_error'
00:21:46.515 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 ))
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92738
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92738 ']'
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92738
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92738
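For reference, the get_transient_errcount helper traced at digest.sh@71/@27/@28 above reduces to the following self-contained snippet. The socket path, RPC name, and jq filter are taken verbatim from this trace; the variable names and the closing echo are illustrative additions.

#!/usr/bin/env bash
# Ask bdevperf (over its private RPC socket) for per-bdev I/O statistics and
# pull out the NVMe "command transient transport error" counter for nvme0n1.
SPDK=/home/vagrant/spdk_repo/spdk   # checkout location used by this run
SOCK=/var/tmp/bperf.sock            # bdevperf's private RPC socket
errcount=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# digest.sh@71 then asserts the counter is non-zero; in this run it read 140.
(( errcount > 0 )) && echo "nvme0n1 saw $errcount transient transport errors"

The counter is only populated because bdev_nvme_set_options --nvme-error-stat was issued when the controller was configured; the same call is visible in the setup trace for the next run below.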
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:47.081 killing process with pid 92738
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92738'
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92738
00:21:47.081 Received shutdown signal, test time was about 2.000000 seconds
00:21:47.081
00:21:47.081 Latency(us)
00:21:47.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:47.081 ===================================================================================================================
00:21:47.081 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:47.081 05:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92738
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92823
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92823 /var/tmp/bperf.sock
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92823 ']'
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:47.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:47.081 05:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:47.081 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:47.081 Zero copy mechanism will not be used.
00:21:47.081 [2024-07-25 05:27:51.216580] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:21:47.081 [2024-07-25 05:27:51.216677] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92823 ]
00:21:47.339 [2024-07-25 05:27:51.356413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:47.339 [2024-07-25 05:27:51.426201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
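The run_bperf_err trace above brings up a fresh bdevperf with no bdevs configured (-z) on a private RPC socket, then blocks until that socket answers. A condensed sketch of the launch, with a simple poll loop standing in for the autotest waitforlisten helper (rpc_get_methods is a stock SPDK RPC; the retry count mirrors max_retries=100):

#!/usr/bin/env bash
# Start bdevperf idle on core mask 0x2 with a private RPC socket:
# 128 KiB random reads, queue depth 16, 2 s timed run; -z parks it until
# bdevs are configured over RPC and perform_tests is called.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Stand-in for waitforlisten: poll until the socket accepts any RPC.
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        rpc_get_methods &> /dev/null && break
    sleep 0.1
done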
00:21:48.273 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:48.273 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:21:48.273 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:48.273 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:48.531 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:48.531 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:48.531 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:48.532 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:48.532 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:48.532 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:48.790 nvme0n1
00:21:48.790 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:48.790 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:48.790 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:48.790 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:48.790 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:48.790 05:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:48.790 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:48.790 Zero copy mechanism will not be used. Running I/O for 2 seconds...
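Everything between process start and "Running I/O for 2 seconds..." is plain RPC configuration. Collected into one script for readability; the commands are verbatim from the trace, the comments are interpretation, and the -i 32 value is copied as-is:

#!/usr/bin/env bash
# bperf_rpc in digest.sh is just rpc.py pointed at the private socket:
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
# Keep per-bdev NVMe error counters (--nvme-error-stat) and retry failed
# commands indefinitely, so injected digest errors are counted, not fatal.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# No crc32c corruption while the controller attaches.
bperf_rpc accel_error_inject_error -o crc32c -t disable
# Attach the target over TCP with data digest enabled (--ddgst): payload
# CRC32C is verified on every read completion; this creates bdev nvme0n1.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Re-arm the injector to corrupt crc32c results; subsequent READs fail
# digest validation and complete as TRANSIENT TRANSPORT ERROR (00/22),
# which is exactly the dump that follows.
bperf_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
# Start the timed workload that was parked behind -z.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests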
00:21:48.790 [2024-07-25 05:27:52.941017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:48.790 [2024-07-25 05:27:52.941076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.790 [2024-07-25 05:27:52.941092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.790 [2024-07-25 05:27:52.945827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:48.790 [2024-07-25 05:27:52.945872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.790 [2024-07-25 05:27:52.945886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.790 [2024-07-25 05:27:52.950359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:48.790 [2024-07-25 05:27:52.950403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.790 [2024-07-25 05:27:52.950417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.791 [2024-07-25 05:27:52.954179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:48.791 [2024-07-25 05:27:52.954234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.791 [2024-07-25 05:27:52.954248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.791 [2024-07-25 05:27:52.958327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:48.791 [2024-07-25 05:27:52.958380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.791 [2024-07-25 05:27:52.958396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.791 [2024-07-25 05:27:52.962893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:48.791 [2024-07-25 05:27:52.962938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.791 [2024-07-25 05:27:52.962965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.967324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.967367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.967382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.971115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.971158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.971172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.975590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.975649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.975664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.979686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.979745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.979760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.984075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.984119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.984133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.988320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.988363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.988377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.992479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.992537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.992551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:52.996672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:52.996715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:52.996729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.001186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.001229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.001243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.005571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.005630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.005644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.009461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.009521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.009535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.014257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.014300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.014315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.018223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.018295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.018309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.021825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.021867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.021881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.026553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.026609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.026639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.030425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.030469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.030483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.034753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.034795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.034810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.039421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.039479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.039493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.043148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.043190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.043204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.047780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.047838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.047852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.051775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.051832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.051862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.056318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.056377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:49.050 [2024-07-25 05:27:53.056391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.059791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.059846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.050 [2024-07-25 05:27:53.059876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.050 [2024-07-25 05:27:53.064271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.050 [2024-07-25 05:27:53.064328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.064343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.069369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.069426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.069456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.072703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.072758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.072788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.077170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.077213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.077227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.081745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.081802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.081832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.085829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.085904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.085919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.089457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.089500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.089514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.093977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.094031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.094046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.098016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.098071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.098086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.101881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.101924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.101938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.106506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.106550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.106565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.110936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.110988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.111003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.114787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.114829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.114844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.119090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.119135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.119149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.123646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.123692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.123706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.127982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.128023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.128037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.132260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.132302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.132316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.136584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.136628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.136641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.140056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.140117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.140132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.144946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.145003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.145017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.149969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.150031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.150046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.153184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.153233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.153248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.157968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.158038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.158054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.163282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.163343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.163358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.167887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.167967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.167984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.171252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.171294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.171308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.175740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 
[2024-07-25 05:27:53.175784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.175798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.180559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.180617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.180631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.183810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.183852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.183866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.188803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.188848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.188863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.193899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.193944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.193971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.197523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.197563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.197577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.201601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.201645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.201660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.205689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.205731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.205745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.210117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.210161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.210176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.214103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.214145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.214160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.218755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.218800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.218815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.051 [2024-07-25 05:27:53.223736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.051 [2024-07-25 05:27:53.223780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.051 [2024-07-25 05:27:53.223794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.227048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.227099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.227113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.231400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.231441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.231455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.236158] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.236202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.236216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.239725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.239781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.239811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.244006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.244052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.244084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.249313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.249357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.249372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.253784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.253828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.253842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.256607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.256647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.256662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.261748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.261793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.261807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:49.321 [2024-07-25 05:27:53.265015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.265059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.265073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.268969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.269006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.269020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.273125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.273167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.273181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.277627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.277671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.277686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.281799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.281872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.281886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.287097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.287139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.287153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.292404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.292463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.292477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.296218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.296259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.296274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.300488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.300545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.300559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.305624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.305667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.305682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.308967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.309021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.309042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.313555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.313630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.313644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.318279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.318323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.318338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.322036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.322079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.322092] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.326237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.326282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.326296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.330746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.330789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.330803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.335145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.335187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.335201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.339139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.339182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.339196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.343714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.343756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.343771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.348236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.348284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.348299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.351978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.352023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.352037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.356928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.356982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.356997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.360623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.360665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.360680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.365260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.365303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.365317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.369588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.369631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.369646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.373244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.373286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.373300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.377671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.377715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.377730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.381903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.381945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:49.321 [2024-07-25 05:27:53.381974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.385574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.385617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.385631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.389801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.389844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-07-25 05:27:53.389859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-07-25 05:27:53.394285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.321 [2024-07-25 05:27:53.394328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.394341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.397774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.397818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.397833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.402450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.402499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.402513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.406522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.406572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.406587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.411146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.411208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.411223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.415174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.415221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.415235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.419325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.419368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.419382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.423249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.423293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.423307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.427246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.427289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.427304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.431703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.431745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.431760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.435037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.435089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.435103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.439607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.439649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.439664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.444201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.444244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.444258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.447785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.447826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.447841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.452125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.452186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.452201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.456552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.456594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.456609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.460682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.460726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.460740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.464438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.464482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.464496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.468899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 
[2024-07-25 05:27:53.468941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.468975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.472818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.472861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.472875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.476729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.476772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.476787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.481378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.481421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.481436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.485061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.485105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.485120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.488920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.488983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.488999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.322 [2024-07-25 05:27:53.493616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.322 [2024-07-25 05:27:53.493661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-07-25 05:27:53.493676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.497660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.497708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.497722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.501515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.501559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.501573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.505330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.505376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.505390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.510077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.510136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.510151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.514708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.514758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.514773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.518163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.518207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.518221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.522889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.522945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.522983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.528084] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.528138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.528153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.530905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.530946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.530973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.536254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.536297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.536313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.540030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.540072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.540086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.543973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.544013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.544027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.547854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.547896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.547911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.552277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.552320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.552334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
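
The records repeating above show SPDK's NVMe/TCP initiator rejecting received data PDUs whose data digest does not match the payload: nvme_tcp.c flags the mismatch when the CRC32 computation completes, and nvme_qpair.c prints the affected READ command together with its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. NVMe/TCP header and data digests (HDGST/DDGST) are CRC32C (Castagnoli) checksums, so a minimal sketch of the check being exercised here, assuming a plain bitwise implementation rather than SPDK's accelerated one, looks like this:

/* Minimal sketch (not SPDK code): bitwise CRC32C, the checksum NVMe/TCP
 * uses for its optional header (HDGST) and data (DDGST) digests. A receiver
 * recomputes it over each data PDU's payload and compares it with the DDGST
 * field; a mismatch is what the "data digest error" records above report. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;            /* standard CRC32C initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)  /* reflected polynomial 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;              /* final XOR */
}

int main(void)
{
    const char *payload = "123456789";
    /* The well-known CRC32C check value for "123456789" is 0xE3069283. */
    printf("crc32c=0x%08X\n", crc32c((const uint8_t *)payload, strlen(payload)));
    return 0;
}

Note that each completion prints dnr:0, i.e. the "do not retry" bit is clear, so these digest failures are reported as transient, retryable transport errors rather than fatal command errors.
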
00:21:49.618 [2024-07-25 05:27:53.556828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.556872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.556886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.560297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.560340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.560354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.565003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.565045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.565060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.567823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.567864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.567878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.572460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.572503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.572518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.577481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.577534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.577548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.581035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.581078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.581093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.585406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.585449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.618 [2024-07-25 05:27:53.585463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.618 [2024-07-25 05:27:53.588810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.618 [2024-07-25 05:27:53.588854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.588868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.593198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.593241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.593256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.596418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.596461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.596476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.600630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.600673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.600686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.605273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.605316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.605330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.608762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.608805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.608819] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.613742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.613785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.613800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.618589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.618631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.618646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.623641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.623686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.623700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.626533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.626572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.626586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.631542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.631591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.631606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.635106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.635150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.635163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.639465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.639508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 
05:27:53.639522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.643893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.643936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.643965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.647635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.647677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.647691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.652535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.652579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.652593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.657471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.657513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.657527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.660922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.660978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.660994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.665529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.665572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.665587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.669939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.670010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.670025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.673211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.673260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.673276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.677508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.677554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.677568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.682017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.682060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.682075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.686337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.686379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.686393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.690283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.690326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.690340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.694468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.694511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.619 [2024-07-25 05:27:53.694525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.619 [2024-07-25 05:27:53.698564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:49.619 [2024-07-25 05:27:53.698607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.619 [2024-07-25 05:27:53.698621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:49.619 [2024-07-25 05:27:53.702888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0)
00:21:49.619 [2024-07-25 05:27:53.702930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.620 [2024-07-25 05:27:53.702944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:49.620 [2024-07-25 05:27:53.707348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0)
00:21:49.620 [2024-07-25 05:27:53.707391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.620 [2024-07-25 05:27:53.707405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... over a hundred further identical triplets omitted: each is a nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR* "data digest error on tqpair=(0x2303fd0)" followed by a nvme_qpair.c READ command notice and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion notice, all on qid:1, with varying cid, lba, and sqhd values, spanning timestamps 2024-07-25 05:27:53.711555 through 05:27:54.307658 ...]
00:21:50.146 [2024-07-25 05:27:54.311329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0)
00:21:50.146 [2024-07-25 05:27:54.311371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:50.146 [2024-07-25 05:27:54.311386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:50.146 [2024-07-25 05:27:54.315653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.146 [2024-07-25 05:27:54.315723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.146 [2024-07-25 05:27:54.315752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.319643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.319698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.319727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.324046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.324103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.324117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.328030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.328083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.328114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.332741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.332795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.332825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.336460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.336528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.336543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.341041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.341098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.341113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.346604] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.346660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.346691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.349842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.349908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.349937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.354592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.354647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.354677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.358486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.358544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.358574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.362962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.363028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.363068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.367049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.367118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.367133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.371324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.371366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.371380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:21:50.406 [2024-07-25 05:27:54.375863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.375918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.375948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.379379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.379451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.379481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.383562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.383618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.383648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.388398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.388455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.388484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.391957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.392044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.392074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.397062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.397118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.397132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.401803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.401876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.401891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.405988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.406041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.406056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.410152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.410194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.410209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.406 [2024-07-25 05:27:54.414122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.406 [2024-07-25 05:27:54.414163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.406 [2024-07-25 05:27:54.414178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.418239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.418281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.418294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.422321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.422363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.422377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.426559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.426601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.426615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.431152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.431194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.431209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.434832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.434888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.434917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.439022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.439104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.439119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.443395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.443438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.443452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.447460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.447503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.447516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.451894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.451937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.451965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.455915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.455968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.455983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.460756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.460804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 
[2024-07-25 05:27:54.460819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.465871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.465917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.465931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.468715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.468757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.468772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.473798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.473855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.473869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.477912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.477980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.477996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.481952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.482018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.482049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.486459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.486501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.486515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.490073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.490129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.490159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.493916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.493997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.494012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.498542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.498599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.498613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.501963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.502030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.502061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.506221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.506277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.506291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.510587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.510644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.510680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.515405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.515446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.515461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.407 [2024-07-25 05:27:54.518754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.407 [2024-07-25 05:27:54.518810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.407 [2024-07-25 05:27:54.518824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.523235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.523278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.523293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.527784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.527826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.527840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.531728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.531771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.531786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.536202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.536243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.536257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.540510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.540552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.540567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.544058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.544099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.544113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.548299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.548341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.548355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.552522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.552564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.552578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.556099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.556139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.556154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.560896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.560939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.560967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.565145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.565187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.565201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.568893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.568935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.568949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.572668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.572710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.572724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.408 [2024-07-25 05:27:54.576999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2303fd0) 00:21:50.408 [2024-07-25 05:27:54.577040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.408 [2024-07-25 05:27:54.577054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.667 [2024-07-25 05:27:54.581467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.667 [2024-07-25 05:27:54.581509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.581523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.585163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.585203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.585217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.589300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.589342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.589356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.593039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.593080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.593094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.597032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.597073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.597088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.601226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.601269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.601283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.604978] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.605019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.605034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.608697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.608739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.608753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.613166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.613209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.613223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.617258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.617300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.617314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.621274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.621315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.621329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.625310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.625352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.625366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.628860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.628902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.628915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:50.668 [2024-07-25 05:27:54.633156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.633198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.633212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.636865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.636908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.636922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.641271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.641314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.641328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.644853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.644895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.644909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.649655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.649697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.649711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.654274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.654316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.654331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.658381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.658423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.658437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.662423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.662465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.662479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.666900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.666967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.666982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.671580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.671621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.671635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.674486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.674528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.674542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.679310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.679354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.679368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.683774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.683818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.683832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.668 [2024-07-25 05:27:54.687283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.668 [2024-07-25 05:27:54.687327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.668 [2024-07-25 05:27:54.687341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.691871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.691914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.691928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.696610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.696660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.696675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.700358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.700402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.700417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.705087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.705134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.705148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.709546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.709605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.709619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.713536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.713577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.713592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.717043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.717087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.717101] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.721675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.721718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.721733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.725531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.725574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.725588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.730307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.730351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.730365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.734226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.734268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.734283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.737948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.738007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.738020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.742658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.742717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.742731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.747180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.747227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:50.669 [2024-07-25 05:27:54.747242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.750889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.750932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.750947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.755034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.755085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.755099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.759861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.759918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.759932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.763313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.763354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.763368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.767999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.768043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.768057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.773584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.773643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.773673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.777861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.777903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.777917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.781468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.781511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.781526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.786343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.786385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.786399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.790121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.790163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.790177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.794547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.794589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.794603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.798178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.798220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.669 [2024-07-25 05:27:54.798235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.669 [2024-07-25 05:27:54.802641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.669 [2024-07-25 05:27:54.802684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.802698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.806978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.807021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.807035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.812050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.812094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.812108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.815597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.815641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.815655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.819554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.819598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.819613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.823555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.823595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.823611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.827661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.827704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.827718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.831147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.831189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.831203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.835731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 
00:21:50.670 [2024-07-25 05:27:54.835778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.835793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.670 [2024-07-25 05:27:54.839568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.670 [2024-07-25 05:27:54.839612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.670 [2024-07-25 05:27:54.839626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.843559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.843606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.843621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.848166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.848210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.848224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.852645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.852688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.852702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.856395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.856437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.856451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.860609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.860653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.860667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.865349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.865392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.865407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.869661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.869704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.869718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.873412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.873455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.873469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.877485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.877528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.877543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.881593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.881646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.881661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.886505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.886578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.886594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.889833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.889892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.889907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.895119] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.895180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.895195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.898630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.898688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.898704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.903386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.903455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.903470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.907302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.907345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.907359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.911692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.911747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.911762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.916702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.916773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.916788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.929 [2024-07-25 05:27:54.920625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0) 00:21:50.929 [2024-07-25 05:27:54.920670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.929 [2024-07-25 05:27:54.920685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0
00:21:50.929 [2024-07-25 05:27:54.924580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0)
00:21:50.929 [2024-07-25 05:27:54.924622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:50.929 [2024-07-25 05:27:54.924636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:50.929 [2024-07-25 05:27:54.928837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0)
00:21:50.929 [2024-07-25 05:27:54.928883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:50.929 [2024-07-25 05:27:54.928897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:50.929 [2024-07-25 05:27:54.933521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2303fd0)
00:21:50.929 [2024-07-25 05:27:54.933564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:50.929 [2024-07-25 05:27:54.933578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:50.929
00:21:50.929 Latency(us)
00:21:50.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.929 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:50.930 nvme0n1 : 2.00 7325.97 915.75 0.00 0.00 2179.53 655.36 5987.61
00:21:50.930 ===================================================================================================================
00:21:50.930 Total : 7325.97 915.75 0.00 0.00 2179.53 655.36 5987.61
00:21:50.930 0
00:21:50.930 05:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:50.930 05:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:50.930 05:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:50.930 05:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:50.930 | .driver_specific
00:21:50.930 | .nvme_error
00:21:50.930 | .status_code
00:21:50.930 | .command_transient_transport_error'
00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 472 > 0 ))
00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92823
00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92823 ']'
00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92823
00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92823 00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.188 killing process with pid 92823 00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92823' 00:21:51.188 Received shutdown signal, test time was about 2.000000 seconds 00:21:51.188 00:21:51.188 Latency(us) 00:21:51.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.188 =================================================================================================================== 00:21:51.188 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92823 00:21:51.188 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92823 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92919 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92919 /var/tmp/bperf.sock 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92919 ']' 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.446 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:51.446 [2024-07-25 05:27:55.464537] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
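The (( 472 > 0 )) check traced a few lines above is the pass criterion for the randread leg that just finished: get_transient_errcount reads bdevperf's I/O statistics over the bperf RPC socket, and because the controller was attached with --nvme-error-stat, per-status NVMe error counters appear under driver_specific in that output. A minimal standalone sketch of the check, built only from the RPC command and jq filter shown in the trace (socket path and bdev name are the ones this run uses):

  # Fetch bdev I/O statistics from bdevperf over its private RPC socket; with
  # --nvme-error-stat enabled, NVMe error counters appear under driver_specific.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  # Every injected CRC32C digest failure should have been completed as COMMAND
  # TRANSIENT TRANSPORT ERROR (00/22); the randread leg above counted 472.
  (( errcount > 0 ))

The same extraction runs again after the randwrite leg, for which the trace continues below with the EAL bring-up of a fresh bdevperf instance (pid 92919).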
00:21:51.446 [2024-07-25 05:27:55.464624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92919 ] 00:21:51.446 [2024-07-25 05:27:55.596738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.704 [2024-07-25 05:27:55.656259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.704 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.704 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:51.704 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:51.704 05:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:51.962 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:51.962 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.962 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:51.962 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.962 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:51.962 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:52.220 nvme0n1 00:21:52.221 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:52.221 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.221 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:52.479 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.479 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:52.479 05:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:52.479 Running I/O for 2 seconds... 
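Setup for the randwrite leg is now complete, and the two-second run that follows mirrors the read case: the accel layer's software CRC32C results are deliberately corrupted, so data digest verification fails and the affected WRITE commands are reported with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Condensed, the sequence just traced reduces to the sketch below, assembled from the commands visible above (backgrounding and line breaks added for readability; note the two accel_error_inject_error calls go through the harness's rpc_cmd helper rather than bperf_rpc, so which RPC socket they reach is the harness's choice and they are shown here without -s):

  # Start bdevperf idle (-z) on core 1 (-m 2) with a private RPC socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Count NVMe errors per status code and retry failed I/O indefinitely, so
  # injected digest failures show up in iostat without failing the job.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection stays disabled while the controller connects with data digest
  # (--ddgst) enabled, so the attach itself succeeds.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Re-arm injection to corrupt CRC32C results (-t corrupt -i 256 as traced),
  # then kick off the timed workload.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests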
00:21:52.479 [2024-07-25 05:27:56.532280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f6458 00:21:52.479 [2024-07-25 05:27:56.533384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.479 [2024-07-25 05:27:56.533430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:52.479 [2024-07-25 05:27:56.546696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e4de8 00:21:52.479 [2024-07-25 05:27:56.548481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.479 [2024-07-25 05:27:56.548522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:52.479 [2024-07-25 05:27:56.555402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e01f8 00:21:52.479 [2024-07-25 05:27:56.556189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.479 [2024-07-25 05:27:56.556229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:52.479 [2024-07-25 05:27:56.569817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f9b30 00:21:52.479 [2024-07-25 05:27:56.571136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.571176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:52.480 [2024-07-25 05:27:56.581142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190dfdc0 00:21:52.480 [2024-07-25 05:27:56.582252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.582291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:52.480 [2024-07-25 05:27:56.592578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f3a28 00:21:52.480 [2024-07-25 05:27:56.593560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.593597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:52.480 [2024-07-25 05:27:56.603880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ebb98 00:21:52.480 [2024-07-25 05:27:56.604695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.604735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 
cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:52.480 [2024-07-25 05:27:56.618169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e9168 00:21:52.480 [2024-07-25 05:27:56.619805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.619843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:52.480 [2024-07-25 05:27:56.629371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e7818 00:21:52.480 [2024-07-25 05:27:56.630688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.630728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:52.480 [2024-07-25 05:27:56.641051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f7970 00:21:52.480 [2024-07-25 05:27:56.642385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.642424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:52.480 [2024-07-25 05:27:56.652200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190edd58 00:21:52.480 [2024-07-25 05:27:56.653319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.480 [2024-07-25 05:27:56.653358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.663929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f9b30 00:21:52.739 [2024-07-25 05:27:56.664984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.665022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.678320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e5a90 00:21:52.739 [2024-07-25 05:27:56.680059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.680099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.686870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e7818 00:21:52.739 [2024-07-25 05:27:56.687633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.687671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.701244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e1f80 00:21:52.739 [2024-07-25 05:27:56.702656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.702694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.712371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190eff18 00:21:52.739 [2024-07-25 05:27:56.713554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.713594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.724049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ebb98 00:21:52.739 [2024-07-25 05:27:56.725171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.725209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.738359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fb480 00:21:52.739 [2024-07-25 05:27:56.740166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.740205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.746901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e01f8 00:21:52.739 [2024-07-25 05:27:56.747737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.747774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.761266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e3d08 00:21:52.739 [2024-07-25 05:27:56.762774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.762813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.773362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f5378 00:21:52.739 [2024-07-25 05:27:56.774859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.774897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.784749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ec840 00:21:52.739 [2024-07-25 05:27:56.786119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.786155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.796057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f31b8 00:21:52.739 [2024-07-25 05:27:56.797346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.797385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.807713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0788 00:21:52.739 [2024-07-25 05:27:56.808905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.808943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.819754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e88f8 00:21:52.739 [2024-07-25 05:27:56.820473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.820513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.831181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e88f8 00:21:52.739 [2024-07-25 05:27:56.831785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.831821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.845252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f4298 00:21:52.739 [2024-07-25 05:27:56.846942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.846990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.857350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fd640 00:21:52.739 [2024-07-25 05:27:56.859034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.859102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.867157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ef270 00:21:52.739 [2024-07-25 05:27:56.867904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.867944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.879262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e5220 00:21:52.739 [2024-07-25 05:27:56.880473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.880511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.893564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ecc78 00:21:52.739 [2024-07-25 05:27:56.895463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.895502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:52.739 [2024-07-25 05:27:56.902137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e38d0 00:21:52.739 [2024-07-25 05:27:56.903054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.739 [2024-07-25 05:27:56.903102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:52.998 [2024-07-25 05:27:56.916512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190de038 00:21:52.998 [2024-07-25 05:27:56.918167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.998 [2024-07-25 05:27:56.918206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:52.998 [2024-07-25 05:27:56.928810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0ff8 00:21:52.998 [2024-07-25 05:27:56.930413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:56.930453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:56.940541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f3a28 00:21:52.999 [2024-07-25 05:27:56.942032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 
05:27:56.942072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:56.951465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f7100 00:21:52.999 [2024-07-25 05:27:56.953454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:56.953494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:56.964316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e7818 00:21:52.999 [2024-07-25 05:27:56.965315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:56.965355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:56.975085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e7c50 00:21:52.999 [2024-07-25 05:27:56.976211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:56.976249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:56.989478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e7818 00:21:52.999 [2024-07-25 05:27:56.991298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:56.991339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:56.998029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fcdd0 00:21:52.999 [2024-07-25 05:27:56.998836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:56.998874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.012389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fd640 00:21:52.999 [2024-07-25 05:27:57.013884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.013925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.023519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190eea00 00:21:52.999 [2024-07-25 05:27:57.024774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:52.999 [2024-07-25 05:27:57.024812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.035231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e1b48 00:21:52.999 [2024-07-25 05:27:57.036442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.036482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.047364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fef90 00:21:52.999 [2024-07-25 05:27:57.048557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.048595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.058689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fd640 00:21:52.999 [2024-07-25 05:27:57.059748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.059786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.070011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e12d8 00:21:52.999 [2024-07-25 05:27:57.070877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.070915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.083508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fac10 00:21:52.999 [2024-07-25 05:27:57.084888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.084933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.094786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ef6a8 00:21:52.999 [2024-07-25 05:27:57.095911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.095968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.106491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f2510 00:21:52.999 [2024-07-25 05:27:57.107608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:176 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.107646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.118637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e5220 00:21:52.999 [2024-07-25 05:27:57.119742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.119779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.132137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ec840 00:21:52.999 [2024-07-25 05:27:57.133724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.133763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.143329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f96f8 00:21:52.999 [2024-07-25 05:27:57.144690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.144728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.154979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f3a28 00:21:52.999 [2024-07-25 05:27:57.156267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.156305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:52.999 [2024-07-25 05:27:57.169387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fef90 00:21:52.999 [2024-07-25 05:27:57.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.999 [2024-07-25 05:27:57.171410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.178024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e4de8 00:21:53.259 [2024-07-25 05:27:57.179053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.179117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.192502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0788 00:21:53.259 [2024-07-25 05:27:57.194175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:15707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.194218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.203123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f6cc8 00:21:53.259 [2024-07-25 05:27:57.205150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.215902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ea248 00:21:53.259 [2024-07-25 05:27:57.216990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.217028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.227295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f20d8 00:21:53.259 [2024-07-25 05:27:57.228169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.228211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.238612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190efae0 00:21:53.259 [2024-07-25 05:27:57.239391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.239432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.249914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f4f40 00:21:53.259 [2024-07-25 05:27:57.250462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.250498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.263530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e1710 00:21:53.259 [2024-07-25 05:27:57.264896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.264934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.275923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e23b8 00:21:53.259 [2024-07-25 05:27:57.277623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:86 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.277661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.284480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f3a28 00:21:53.259 [2024-07-25 05:27:57.285212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.285252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:53.259 [2024-07-25 05:27:57.298839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e88f8 00:21:53.259 [2024-07-25 05:27:57.300277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.259 [2024-07-25 05:27:57.300313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.310032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e99d8 00:21:53.260 [2024-07-25 05:27:57.311243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.311283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.321693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e38d0 00:21:53.260 [2024-07-25 05:27:57.322645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.322685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.332686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e3060 00:21:53.260 [2024-07-25 05:27:57.333457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.333495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.346301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f5378 00:21:53.260 [2024-07-25 05:27:57.347615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.347651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.358497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f8618 00:21:53.260 [2024-07-25 05:27:57.359778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.359816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.369851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fcdd0 00:21:53.260 [2024-07-25 05:27:57.370988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.371024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.384477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e8d30 00:21:53.260 [2024-07-25 05:27:57.386412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.386452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.393053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ff3c8 00:21:53.260 [2024-07-25 05:27:57.394014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.394057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.407425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e5ec8 00:21:53.260 [2024-07-25 05:27:57.409075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.409114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.418593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f1ca0 00:21:53.260 [2024-07-25 05:27:57.420038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.420075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:53.260 [2024-07-25 05:27:57.430283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f2510 00:21:53.260 [2024-07-25 05:27:57.431662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.260 [2024-07-25 05:27:57.431700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.442499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ec840 00:21:53.520 [2024-07-25 
05:27:57.443853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.443891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.453877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0bc0 00:21:53.520 [2024-07-25 05:27:57.455117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.455155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.467857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190eff18 00:21:53.520 [2024-07-25 05:27:57.469696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.469734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.476430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0788 00:21:53.520 [2024-07-25 05:27:57.477290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.477327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.490843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ed4e8 00:21:53.520 [2024-07-25 05:27:57.492398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.492434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.502019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e3d08 00:21:53.520 [2024-07-25 05:27:57.503349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.503391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.513721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e73e0 00:21:53.520 [2024-07-25 05:27:57.514975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.515010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.528117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f5be8 
00:21:53.520 [2024-07-25 05:27:57.530028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.530067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.536659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fd208 00:21:53.520 [2024-07-25 05:27:57.537616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.537649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.548746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f1ca0 00:21:53.520 [2024-07-25 05:27:57.549686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.549723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.560090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e73e0 00:21:53.520 [2024-07-25 05:27:57.560876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.560915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.572384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190de038 00:21:53.520 [2024-07-25 05:27:57.573185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.520 [2024-07-25 05:27:57.573225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:53.520 [2024-07-25 05:27:57.585556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fef90 00:21:53.521 [2024-07-25 05:27:57.586831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.586869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.596879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ec840 00:21:53.521 [2024-07-25 05:27:57.598016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.598051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.608225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with 
pdu=0x2000190fd208 00:21:53.521 [2024-07-25 05:27:57.609186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.609222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.619595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190feb58 00:21:53.521 [2024-07-25 05:27:57.620433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.620472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.633804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f2948 00:21:53.521 [2024-07-25 05:27:57.634814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.634854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.645193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ff3c8 00:21:53.521 [2024-07-25 05:27:57.646031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.646070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.656558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ee190 00:21:53.521 [2024-07-25 05:27:57.657243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.657282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.670181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190efae0 00:21:53.521 [2024-07-25 05:27:57.671673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.671710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.681477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0ff8 00:21:53.521 [2024-07-25 05:27:57.682789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.682827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:53.521 [2024-07-25 05:27:57.692845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cc5320) with pdu=0x2000190fa7d8 00:21:53.521 [2024-07-25 05:27:57.694040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.521 [2024-07-25 05:27:57.694077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.706439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f7da8 00:21:53.786 [2024-07-25 05:27:57.708156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.708196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.716352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ec408 00:21:53.786 [2024-07-25 05:27:57.717069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.717110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.728547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f3e60 00:21:53.786 [2024-07-25 05:27:57.729562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.729600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.739910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ebfd0 00:21:53.786 [2024-07-25 05:27:57.740743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.740780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.754912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e8088 00:21:53.786 [2024-07-25 05:27:57.756752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.756791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.764741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f1430 00:21:53.786 [2024-07-25 05:27:57.765632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.765672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.776884] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f81e0 00:21:53.786 [2024-07-25 05:27:57.778245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.778282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.788025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f8e88 00:21:53.786 [2024-07-25 05:27:57.789149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.789189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.799701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e49b0 00:21:53.786 [2024-07-25 05:27:57.800757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.800794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.814034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f35f0 00:21:53.786 [2024-07-25 05:27:57.815765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.815803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.822689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e5658 00:21:53.786 [2024-07-25 05:27:57.823473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.823511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.837095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ef6a8 00:21:53.786 [2024-07-25 05:27:57.838526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.838563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.848239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e73e0 00:21:53.786 [2024-07-25 05:27:57.849434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.849471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.860220] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f4b08 00:21:53.786 [2024-07-25 05:27:57.861399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.861438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.874667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fb8b8 00:21:53.786 [2024-07-25 05:27:57.876503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.876542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.883264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f1868 00:21:53.786 [2024-07-25 05:27:57.884106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.884152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.897596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fe2e8 00:21:53.786 [2024-07-25 05:27:57.899134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.899173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.908755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e8088 00:21:53.786 [2024-07-25 05:27:57.910057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.910095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.920431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190eee38 00:21:53.786 [2024-07-25 05:27:57.921644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.921681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.934776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f2510 00:21:53.786 [2024-07-25 05:27:57.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.936726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:53.786 
[2024-07-25 05:27:57.943330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ebfd0 00:21:53.786 [2024-07-25 05:27:57.944250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.944288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:53.786 [2024-07-25 05:27:57.957725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f6458 00:21:53.786 [2024-07-25 05:27:57.959364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.786 [2024-07-25 05:27:57.959404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:57.969127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ea680 00:21:54.045 [2024-07-25 05:27:57.970480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:57.970519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:57.980835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fa3a0 00:21:54.045 [2024-07-25 05:27:57.982156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:57.982196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:57.995233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f1430 00:21:54.045 [2024-07-25 05:27:57.997211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:57.997249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.003777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190feb58 00:21:54.045 [2024-07-25 05:27:58.004780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.004817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.018129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f8e88 00:21:54.045 [2024-07-25 05:27:58.019807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.019845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.029259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e49b0 00:21:54.045 [2024-07-25 05:27:58.030746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.030787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.040979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f7970 00:21:54.045 [2024-07-25 05:27:58.042350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.042387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.052969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190feb58 00:21:54.045 [2024-07-25 05:27:58.053849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.053889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.064805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ecc78 00:21:54.045 [2024-07-25 05:27:58.066043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.066080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.076131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190eee38 00:21:54.045 [2024-07-25 05:27:58.077211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.077249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.090335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190edd58 00:21:54.045 [2024-07-25 05:27:58.092243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.092282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:54.045 [2024-07-25 05:27:58.098856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e4de8 00:21:54.045 [2024-07-25 05:27:58.099804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.045 [2024-07-25 05:27:58.099842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.110986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e9e10 00:21:54.046 [2024-07-25 05:27:58.111918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.111965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.124586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e5a90 00:21:54.046 [2024-07-25 05:27:58.126017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.126055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.135789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f6890 00:21:54.046 [2024-07-25 05:27:58.136945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.137000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.147473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ef270 00:21:54.046 [2024-07-25 05:27:58.148624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.148661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.161975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fef90 00:21:54.046 [2024-07-25 05:27:58.163778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.163817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.170503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f7538 00:21:54.046 [2024-07-25 05:27:58.171349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.171387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.184947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0bc0 00:21:54.046 [2024-07-25 05:27:58.186469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.186507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.196114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e6300 00:21:54.046 [2024-07-25 05:27:58.197372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.197412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.207779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0bc0 00:21:54.046 [2024-07-25 05:27:58.208996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.209032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:54.046 [2024-07-25 05:27:58.219821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e88f8 00:21:54.046 [2024-07-25 05:27:58.220569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.046 [2024-07-25 05:27:58.220608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.231330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fac10 00:21:54.304 [2024-07-25 05:27:58.231938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.231986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.245047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ed4e8 00:21:54.304 [2024-07-25 05:27:58.246472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.246512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.256501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e01f8 00:21:54.304 [2024-07-25 05:27:58.257730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.257769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.267828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fdeb0 00:21:54.304 [2024-07-25 05:27:58.268925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.268977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.279137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fe720 00:21:54.304 [2024-07-25 05:27:58.280055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.280095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.291098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fc998 00:21:54.304 [2024-07-25 05:27:58.291692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.291732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.304763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190e6738 00:21:54.304 [2024-07-25 05:27:58.306182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.306219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.316121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190efae0 00:21:54.304 [2024-07-25 05:27:58.317388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.317426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.329308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fa7d8 00:21:54.304 [2024-07-25 05:27:58.331072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.331111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.340681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190ecc78 00:21:54.304 [2024-07-25 05:27:58.342295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.342333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.352053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0ff8 00:21:54.304 [2024-07-25 05:27:58.353484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.304 [2024-07-25 05:27:58.353523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:54.304 [2024-07-25 05:27:58.365589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190eaef0 00:21:54.305 [2024-07-25 05:27:58.367530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.305 [2024-07-25 05:27:58.367567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:54.305 [2024-07-25 05:27:58.374152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f5378 00:21:54.305 [2024-07-25 05:27:58.375116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.305 [2024-07-25 05:27:58.375154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:54.305 [2024-07-25 05:27:58.388597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fc128 00:21:54.305 [2024-07-25 05:27:58.390238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.305 [2024-07-25 05:27:58.390278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:54.305 [2024-07-25 05:27:58.399868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f2d80 00:21:54.305 [2024-07-25 05:27:58.401434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.305 [2024-07-25 05:27:58.401486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:54.305 [2024-07-25 05:27:58.412156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f9b30 00:21:54.305 [2024-07-25 05:27:58.413550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.305 [2024-07-25 05:27:58.413599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:54.305 [2024-07-25 05:27:58.427177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0bc0 00:21:54.305 [2024-07-25 05:27:58.429282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.305 [2024-07-25 05:27:58.429336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:54.305 [2024-07-25 05:27:58.436090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190fbcf0 00:21:54.305 [2024-07-25 05:27:58.437187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.305 [2024-07-25 
05:27:58.437232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:54.305 [2024-07-25 05:27:58.450927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f0788
00:21:54.305 [2024-07-25 05:27:58.452695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:54.305 [2024-07-25 05:27:58.452741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:21:54.305 [2024-07-25 05:27:58.459601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f2510
00:21:54.305 [2024-07-25 05:27:58.460350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:54.305 [2024-07-25 05:27:58.460388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:21:54.305 [2024-07-25 05:27:58.473965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f4b08
00:21:54.305 [2024-07-25 05:27:58.475393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:54.305 [2024-07-25 05:27:58.475437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:21:54.563 [2024-07-25 05:27:58.485477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190eaef0
00:21:54.563 [2024-07-25 05:27:58.486663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:54.563 [2024-07-25 05:27:58.486709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:21:54.563 [2024-07-25 05:27:58.497201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f8e88
00:21:54.563 [2024-07-25 05:27:58.498321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:54.563 [2024-07-25 05:27:58.498359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:21:54.563 [2024-07-25 05:27:58.509382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5320) with pdu=0x2000190f9f68
00:21:54.563 [2024-07-25 05:27:58.510550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:54.563 [2024-07-25 05:27:58.510596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:21:54.563 
00:21:54.563 Latency(us)
00:21:54.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:54.563 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:54.563 nvme0n1 : 2.00 21231.22 82.93 0.00 0.00 6022.86 2517.18 15490.33
00:21:54.563 ===================================================================================================================
00:21:54.563 Total : 21231.22 82.93 0.00 0.00 6022.86 2517.18 15490.33
00:21:54.563 0
00:21:54.563 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:54.563 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:54.563 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:54.563 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:54.563 | .driver_specific
00:21:54.563 | .nvme_error
00:21:54.563 | .status_code
00:21:54.563 | .command_transient_transport_error'
00:21:54.821 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 ))
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92919
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92919 ']'
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92919
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92919
00:21:54.822 killing process with pid 92919
Received shutdown signal, test time was about 2.000000 seconds
00:21:54.822 00
00:21:54.822 Latency(us)
00:21:54.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:54.822 ===================================================================================================================
00:21:54.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92919'
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92919
00:21:54.822 05:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92919
00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92990
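
The get_transient_errcount check traced above boils down to one RPC plus a jq filter; a minimal standalone sketch, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and was started with error stats enabled via bdev_nvme_set_options --nvme-error-stat:

  # Count digest-triggered retries the same way get_transient_errcount does:
  # bdev_get_iostat exposes per-status-code NVMe error counters under
  # .driver_specific.nvme_error when --nvme-error-stat is in effect.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"

Here the counter comes back as 166, the (( 166 > 0 )) assertion passes, and the first bperf process is torn down before the next pass is configured.
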
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92990 /var/tmp/bperf.sock 00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92990 ']' 00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:55.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:55.080 05:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:55.080 [2024-07-25 05:27:59.074075] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:21:55.080 [2024-07-25 05:27:59.074174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92990 ] 00:21:55.080 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:55.080 Zero copy mechanism will not be used. 
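The waitforlisten gate above simply blocks until the freshly launched bdevperf answers on its RPC socket. A minimal stand-in sketch, assuming the generic rpc_get_methods call (served by any SPDK application) as the liveness probe:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  # Poll until bdevperf starts answering on its UNIX domain socket.
  until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done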
00:21:55.080 [2024-07-25 05:27:59.211508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.338 [2024-07-25 05:27:59.271108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.904 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.904 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:55.904 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:55.904 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:56.162 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:56.162 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.162 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:56.420 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.420 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:56.420 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:56.678 nvme0n1 00:21:56.678 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:56.678 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.678 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:56.678 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.678 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:56.678 05:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:56.678 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:56.678 Zero copy mechanism will not be used. 00:21:56.678 Running I/O for 2 seconds... 
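Everything the trace above does happens over bperf's RPC socket. Condensed into a plain script for readability (same socket, paths, and arguments as in the log; only the two shell variables are added here), the digest-error setup is:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  # Keep per-status-code NVMe error counters and retry failed I/O at the
  # bdev layer (-1 = retry indefinitely), so errors are counted, not fatal.
  "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection starts disabled so the attach itself succeeds...
  "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t disable
  # ...then attach over TCP with data digest (--ddgst) enabled.
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results (same -i 32 argument as above) and drive I/O.
  "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
  # Each corrupted digest should complete as COMMAND TRANSIENT TRANSPORT ERROR;
  # this is the same filter get_transient_errcount applies at host/digest.sh@28.
  "$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The nvmf_digest_error assertion is simply that this count is greater than zero, mirroring the (( 166 > 0 )) check after the first run above.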
00:21:56.678 [2024-07-25 05:28:00.825914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.678 [2024-07-25 05:28:00.826264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.678 [2024-07-25 05:28:00.826297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.678 [2024-07-25 05:28:00.831293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.678 [2024-07-25 05:28:00.831593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.678 [2024-07-25 05:28:00.831625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.678 [2024-07-25 05:28:00.836842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.678 [2024-07-25 05:28:00.837165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.678 [2024-07-25 05:28:00.837197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.678 [2024-07-25 05:28:00.842139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.678 [2024-07-25 05:28:00.842434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.678 [2024-07-25 05:28:00.842466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.678 [2024-07-25 05:28:00.847452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.678 [2024-07-25 05:28:00.847746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.678 [2024-07-25 05:28:00.847777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.678 [2024-07-25 05:28:00.852806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.678 [2024-07-25 05:28:00.853123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.678 [2024-07-25 05:28:00.853154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.858440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.858735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.858766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.863760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.864070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.864102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.869093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.869386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.869417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.874487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.874783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.874814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.880139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.880454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.880486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.885600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.885897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.885928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.891171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.891486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.891518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.896513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.896823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.896854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.901800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.902109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.937 [2024-07-25 05:28:00.902140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.937 [2024-07-25 05:28:00.907172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.937 [2024-07-25 05:28:00.907468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.907499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.912471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.912769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.912802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.917751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.918058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.918088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.923089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.923405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.923436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.928399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.928693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.928724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.933675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.933985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.934016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.938989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.939294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.939326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.944252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.944546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.944577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.949511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.949806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.949838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.954748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.955057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.955097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.960033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.960335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.960367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.965313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.965606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.965637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.970662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.970987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.938 [2024-07-25 05:28:00.971018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.976009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.976304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.976342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.981339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.981631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.981662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.986668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.986991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.987023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.992003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.992299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.992330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:00.997348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:00.997643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:00.997674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.002660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.002970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.003000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.008029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.008335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.008365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.013314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.013610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.013640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.018680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.019006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.019036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.024053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.024348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.024377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.029330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.029621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.029651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.034647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.034973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.035006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.040007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.040300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.040329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.045346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.938 [2024-07-25 05:28:01.045638] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.938 [2024-07-25 05:28:01.045669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.938 [2024-07-25 05:28:01.050721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.051046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.051085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.056073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.056385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.056416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.061409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.061719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.061750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.066722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.067031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.067069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.072028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.072323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.072353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.077317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.077612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.077644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.082637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.082967] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.082996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.087982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.088277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.088308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.093415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.093727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.093756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.099016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.099343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.099373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.104352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.104654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.104685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.939 [2024-07-25 05:28:01.109718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:56.939 [2024-07-25 05:28:01.110030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.939 [2024-07-25 05:28:01.110059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.198 [2024-07-25 05:28:01.115229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.198 [2024-07-25 05:28:01.115534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.115564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.120732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 
00:21:57.199 [2024-07-25 05:28:01.121054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.121083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.126016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.126308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.126338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.131339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.131633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.131663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.136611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.136906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.136936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.141894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.142219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.142249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.147196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.147489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.152560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.152867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.152897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.157842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.158171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.158201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.163111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.163416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.163446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.168402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.168696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.168726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.173697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.174001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.174031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.179006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.179327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.179357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.184467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.184774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.184805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.189971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.190265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.190295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.195532] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.195833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.195863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.200879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.201186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.201217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.206208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.206508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.206538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.211528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.211820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.211851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.216812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.217127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.217157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.222144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.222436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.222465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.227449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.227741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.227772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:57.199 [2024-07-25 05:28:01.232726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.233034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.233064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.238033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.199 [2024-07-25 05:28:01.238326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.199 [2024-07-25 05:28:01.238356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.199 [2024-07-25 05:28:01.243330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.243623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.243654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.248571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.248866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.248897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.253922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.254238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.254267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.259224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.259525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.259554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.264499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.264792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.264823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.269784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.270089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.270125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.275079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.275376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.275406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.280347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.280638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.280668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.285551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.285844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.285874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.290802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.291120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.291151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.296269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.296562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.296593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.200 [2024-07-25 05:28:01.301561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.200 [2024-07-25 05:28:01.301854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.200 [2024-07-25 05:28:01.301885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:57.200 [2024-07-25 05:28:01.306820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90
00:21:57.200 [2024-07-25 05:28:01.307141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:57.200 [2024-07-25 05:28:01.307171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:57.200 [2024-07-25 05:28:01.312136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90
00:21:57.200 [2024-07-25 05:28:01.312432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:57.200 [2024-07-25 05:28:01.312463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair 0x1cc5660, WRITE command print, TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each remaining injected WRITE on qid:1 cid:15, timestamps 2024-07-25 05:28:01.317 through 05:28:02.072, with only the lba and sqhd fields varying ...]
00:21:57.984 [2024-07-25 05:28:02.077341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90
00:21:57.984 [2024-07-25 05:28:02.077635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:57.984 [2024-07-25 05:28:02.077665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.984 [2024-07-25 05:28:02.082602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.984 [2024-07-25 05:28:02.082895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.984 [2024-07-25 05:28:02.082926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.984 [2024-07-25 05:28:02.087912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.088221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.088253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.093218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.093511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.093549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.098490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.098789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.098820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.103792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.104102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.104132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.109077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.109369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.109408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.114343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.114635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.114667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.119597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.119891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.119921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.124895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.125203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.125231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.130192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.130486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.130512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.135506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.135798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.135833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.140825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.141138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.141169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.146106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.146399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.146430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.151504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.151811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.151850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.985 [2024-07-25 05:28:02.157134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:57.985 [2024-07-25 05:28:02.157446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.985 [2024-07-25 05:28:02.157481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.249 [2024-07-25 05:28:02.163383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.249 [2024-07-25 05:28:02.163728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.249 [2024-07-25 05:28:02.163760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.249 [2024-07-25 05:28:02.170501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.249 [2024-07-25 05:28:02.170851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.249 [2024-07-25 05:28:02.170886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.249 [2024-07-25 05:28:02.177975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.249 [2024-07-25 05:28:02.178310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.249 [2024-07-25 05:28:02.178346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.249 [2024-07-25 05:28:02.185080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.249 [2024-07-25 05:28:02.185393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.249 [2024-07-25 05:28:02.185425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.249 [2024-07-25 05:28:02.190501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.249 [2024-07-25 05:28:02.190799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.249 [2024-07-25 05:28:02.190831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.249 [2024-07-25 05:28:02.195889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.249 [2024-07-25 05:28:02.196210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.249 
[2024-07-25 05:28:02.196241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.201255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.201565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.201596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.206600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.206893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.206924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.211872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.212178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.212208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.217148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.217441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.217471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.222416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.222709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.222739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.227716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.228038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.228068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.233053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.233352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.233382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.238389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.238683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.238713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.243717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.244024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.244054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.249050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.249346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.249376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.254292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.254582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.254612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.259537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.259831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.259861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.264816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.265126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.265156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.270118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.270409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.270439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.275440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.275738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.275770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.280723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.281032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.281064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.286097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.286389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.286418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.291654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.291950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.291994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.296979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.297275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.297306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.302363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.302656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.302687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.307694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.308009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.308039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.312997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.313291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.313321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.318255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.318549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.318579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.323590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.323883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.323912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.328908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.329218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.329249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.334311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.334610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.250 [2024-07-25 05:28:02.339669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.250 [2024-07-25 05:28:02.339992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.250 [2024-07-25 05:28:02.340022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.344980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 
[2024-07-25 05:28:02.345272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.345302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.350286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.350581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.350612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.355674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.355995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.356024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.360989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.361290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.361325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.366370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.366663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.366692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.371708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.372052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.372078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.377032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.377323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.377354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.382314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.382604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.382633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.387584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.387877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.387908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.392878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.393187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.393217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.398218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.398512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.398543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.403520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.403814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.403844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.408814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.409136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.409166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.414119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.414418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.414448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.251 [2024-07-25 05:28:02.419440] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.251 [2024-07-25 05:28:02.419733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.251 [2024-07-25 05:28:02.419765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.424979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.425282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.425306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.430578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.430892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.430922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.435888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.436196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.436226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.441237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.441542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.441574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.446566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.446861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.446892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.451859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.452165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.452194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:58.510 [2024-07-25 05:28:02.457231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.457533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.457573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.462573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.462864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.462907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.510 [2024-07-25 05:28:02.467906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.510 [2024-07-25 05:28:02.468230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.510 [2024-07-25 05:28:02.468259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.473264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.473561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.473591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.478625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.478938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.478987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.483924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.484238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.484268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.490098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.490402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.490432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.495423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.495719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.495749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.500802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.501122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.501152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.506151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.506444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.506474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.511482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.511789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.511819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.516841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.517168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.517198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.522320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.522634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.522664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.527920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.528244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.528280] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.533413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.533706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.533736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.538678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.538989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.539018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.543971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.544261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.544291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.549271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.549564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.549595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.554563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.554862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.554893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.559882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.560187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.560212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.565178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.565471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.565502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.570477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.570770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.570801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.575780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.576089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.576119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.581098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.581391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.581420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.586389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.586680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.586716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.591734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.592063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.592093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.597345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.597638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.511 [2024-07-25 05:28:02.597668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.511 [2024-07-25 05:28:02.602754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90 00:21:58.511 [2024-07-25 05:28:02.603078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:58.511 [2024-07-25 05:28:02.603108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:58.511 [2024-07-25 05:28:02.608046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc5660) with pdu=0x2000190fef90
00:21:58.511 [2024-07-25 05:28:02.608356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.511 [2024-07-25 05:28:02.608386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record triplet (tcp.c:2113 data_crc32_calc_done *ERROR*, nvme_qpair.c:243 WRITE *NOTICE*, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR *NOTICE*) repeats for every remaining injected write, timestamps 05:28:02.613391 through 05:28:02.812230, with only the lba and sqhd values varying ...]
00:21:58.772
00:21:58.772 Latency(us)
00:21:58.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:58.772 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:21:58.772 nvme0n1 : 2.00 5751.15 718.89 0.00 0.00 2775.89 2472.49 11677.32
00:21:58.772 ===================================================================================================================
00:21:58.772 Total : 5751.15 718.89 0.00 0.00 2775.89 2472.49 11677.32
00:21:58.772 0
00:21:58.772 05:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:58.772 05:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:58.772 05:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:58.772 05:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | 
.driver_specific 00:21:58.772 | .nvme_error 00:21:58.772 | .status_code 00:21:58.772 | .command_transient_transport_error' 00:21:59.030 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 371 > 0 )) 00:21:59.030 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92990 00:21:59.030 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92990 ']' 00:21:59.030 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92990 00:21:59.030 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:59.030 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.030 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92990 00:21:59.030 killing process with pid 92990 00:21:59.030 Received shutdown signal, test time was about 2.000000 seconds 00:21:59.030 00:21:59.030 Latency(us) 00:21:59.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.031 =================================================================================================================== 00:21:59.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.031 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.031 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.031 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92990' 00:21:59.031 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92990 00:21:59.031 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92990 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 92694 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92694 ']' 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92694 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92694 00:21:59.289 killing process with pid 92694 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92694' 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92694 00:21:59.289 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92694 00:21:59.548 00:21:59.548 real 0m17.964s 
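The (( 371 > 0 )) check above is the heart of the digest-error test: every injected data-digest failure surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, and the helper counts them from bperf's per-bdev iostat JSON. A minimal sketch of that pattern, reconstructed from the xtrace (the exact host/digest.sh helper body is not reproduced here and may differ in detail):

    # Sketch only: the socket path, RPC name, and jq path are taken from the
    # trace above. Returns the count of transient transport errors seen so far.
    get_transient_errcount() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # fails the test if no digest errors were counted
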
00:21:59.548 user 0m34.820s 00:21:59.548 sys 0m4.391s 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:59.548 ************************************ 00:21:59.548 END TEST nvmf_digest_error 00:21:59.548 ************************************ 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.548 rmmod nvme_tcp 00:21:59.548 rmmod nvme_fabrics 00:21:59.548 rmmod nvme_keyring 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 92694 ']' 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 92694 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 92694 ']' 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 92694 00:21:59.548 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92694) - No such process 00:21:59.548 Process with pid 92694 is not found 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 92694 is not found' 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:59.548 ************************************ 00:21:59.548 END TEST nvmf_digest 00:21:59.548 ************************************ 00:21:59.548 00:21:59.548 real 0m35.727s 00:21:59.548 user 1m7.931s 00:21:59.548 sys 0m8.853s 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.548 ************************************ 00:21:59.548 START TEST nvmf_mdns_discovery 00:21:59.548 ************************************ 00:21:59.548 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:21:59.807 * Looking for test storage... 00:21:59.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.807 05:28:03 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same directories as above, with /opt/go promoted to the front ...]
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same directories as above, with /opt/protoc promoted to the front ...]
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo [... the exported PATH value from @4 ...]
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.807 05:28:03 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:59.807 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:59.808 Cannot find device "nvmf_tgt_br" 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.808 Cannot find device "nvmf_tgt_br2" 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:59.808 Cannot find device "nvmf_tgt_br" 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:59.808 Cannot find device "nvmf_tgt_br2" 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.808 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.808 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.808 05:28:03 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.808 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:00.066 05:28:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:00.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:22:00.066 00:22:00.066 --- 10.0.0.2 ping statistics --- 00:22:00.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.066 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:00.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:00.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:00.066 00:22:00.066 --- 10.0.0.3 ping statistics --- 00:22:00.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.066 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:00.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:00.066 00:22:00.066 --- 10.0.0.1 ping statistics --- 00:22:00.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.066 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93285 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93285 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93285 ']' 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
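Before nvmf_tgt was launched above, the burst of ip(8) commands earlier in the trace (nvmf_veth_init) built the network the test runs on. Collected into plain commands, the topology is roughly the following sketch, assembled from the trace itself (second target interface, link-up, and flush steps omitted): the initiator interface stays in the root namespace, the target interface moves into nvmf_tgt_ns_spdk, and both veth peers attach to the nvmf_br bridge.

    # Sketch of the nvmf_veth_init topology; every command below appears in
    # the trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, as the ping output above verifies
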
00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.066 05:28:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.324 [2024-07-25 05:28:04.242053] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:22:00.324 [2024-07-25 05:28:04.242155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.324 [2024-07-25 05:28:04.378175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.324 [2024-07-25 05:28:04.448078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.324 [2024-07-25 05:28:04.448151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.324 [2024-07-25 05:28:04.448165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.324 [2024-07-25 05:28:04.448175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.324 [2024-07-25 05:28:04.448184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.324 [2024-07-25 05:28:04.448222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 [2024-07-25 
05:28:05.345961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 [2024-07-25 05:28:05.354075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 null0 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 null1 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 null2 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 null3 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=93335 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:01.258 05:28:05 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 93335 /tmp/host.sock 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93335 ']' 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.258 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.258 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.516 [2024-07-25 05:28:05.447700] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:22:01.516 [2024-07-25 05:28:05.447786] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93335 ] 00:22:01.516 [2024-07-25 05:28:05.581351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.516 [2024-07-25 05:28:05.649905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=93351 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:01.774 05:28:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:01.774 Process 980 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:01.774 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:01.774 Successfully dropped root privileges. 00:22:01.774 avahi-daemon 0.8 starting up. 00:22:01.774 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:01.774 Successfully called chroot(). 00:22:01.774 Successfully dropped remaining capabilities. 00:22:01.774 No service file found in /etc/avahi/services. 00:22:02.709 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:02.709 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
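The configuration handed to avahi-daemon through /dev/fd/63 by the echo -e at host/mdns_discovery.sh@57 expands to the following file; restricting allow-interfaces to the two target-side veths keeps mDNS traffic inside the test bridge, which is why only nvmf_tgt_if and nvmf_tgt_if2 join the multicast groups in the surrounding messages:

    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no
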
00:22:02.709 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:02.709 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:02.709 Network interface enumeration completed. 00:22:02.709 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:22:02.709 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:02.709 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:22:02.710 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:02.710 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2865282044. 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:02.968 05:28:06 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:02.968 05:28:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
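The get_subsystem_names and get_bdev_list helpers traced through this stretch are thin RPC-plus-jq pipelines; both legitimately return an empty string (hence the '' == '' comparisons) until discovery attaches a controller. A plausible shape for them, reconstructed from the xtrace above (the real script parameterizes the host socket path; /tmp/host.sock is inlined here):

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }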
00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:02.968 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.227 [2024-07-25 05:28:07.157579] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.227 [2024-07-25 05:28:07.234600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.227 05:28:07 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:03.227 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 [2024-07-25 05:28:07.274600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 [2024-07-25 05:28:07.282518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.228 05:28:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:04.162 [2024-07-25 05:28:08.057572] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:04.728 [2024-07-25 05:28:08.657631] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:22:04.728 [2024-07-25 05:28:08.657687] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:04.728 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:04.728 cookie is 0 00:22:04.728 is_local: 1 00:22:04.728 our_own: 0 00:22:04.728 wide_area: 0 00:22:04.728 multicast: 1 00:22:04.728 cached: 1 00:22:04.728 [2024-07-25 
05:28:08.757594] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:22:04.728 [2024-07-25 05:28:08.757635] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:04.728 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:04.728 cookie is 0 00:22:04.728 is_local: 1 00:22:04.728 our_own: 0 00:22:04.728 wide_area: 0 00:22:04.728 multicast: 1 00:22:04.728 cached: 1 00:22:04.728 [2024-07-25 05:28:08.757651] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:22:04.728 [2024-07-25 05:28:08.857592] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:22:04.728 [2024-07-25 05:28:08.857630] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:04.728 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:04.728 cookie is 0 00:22:04.728 is_local: 1 00:22:04.728 our_own: 0 00:22:04.728 wide_area: 0 00:22:04.728 multicast: 1 00:22:04.728 cached: 1 00:22:04.986 [2024-07-25 05:28:08.957592] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:22:04.986 [2024-07-25 05:28:08.957640] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:04.986 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:04.986 cookie is 0 00:22:04.986 is_local: 1 00:22:04.986 our_own: 0 00:22:04.986 wide_area: 0 00:22:04.986 multicast: 1 00:22:04.986 cached: 1 00:22:04.986 [2024-07-25 05:28:08.957657] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:22:05.551 [2024-07-25 05:28:09.670388] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:05.551 [2024-07-25 05:28:09.670437] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:05.551 [2024-07-25 05:28:09.670460] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:05.809 [2024-07-25 05:28:09.756530] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:05.809 [2024-07-25 05:28:09.813505] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:05.809 [2024-07-25 05:28:09.813539] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:05.809 [2024-07-25 05:28:09.870076] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:05.809 [2024-07-25 05:28:09.870109] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:05.809 [2024-07-25 05:28:09.870130] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.809 [2024-07-25 05:28:09.956227] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:06.067 [2024-07-25 05:28:10.012898] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:06.067 [2024-07-25 05:28:10.012945] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 
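Both discovery controllers are attached by this point: mdns0_nvme0 came in through 10.0.0.3:8009 and mdns1_nvme0 through 10.0.0.2:8009, each pulling in its subsystem's data path on port 4420. The session being interrogated in the surrounding trace was opened by the bdev_nvme_start_mdns_discovery call near the top of the test; the two inspection RPCs in play look roughly like this (expected outputs taken from the comparisons in the trace):

    rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs    # -> mdns
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs         # -> mdns0_nvme mdns1_nvme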
00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.636 05:28:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:09.571 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:09.571 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:09.571 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.571 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.571 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.571 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:09.571 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.828 [2024-07-25 05:28:13.817777] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:09.828 [2024-07-25 05:28:13.818553] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:09.828 [2024-07-25 05:28:13.818596] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:09.828 [2024-07-25 05:28:13.818636] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:09.828 [2024-07-25 05:28:13.818651] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.828 [2024-07-25 05:28:13.825709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:09.828 [2024-07-25 05:28:13.826546] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:09.828 [2024-07-25 05:28:13.826641] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.828 05:28:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:09.828 [2024-07-25 05:28:13.956659] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:09.828 [2024-07-25 05:28:13.957644] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:10.086 [2024-07-25 05:28:14.014991] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:10.086 [2024-07-25 05:28:14.015023] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
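Adding the 4421 listeners raises an asynchronous event (AER) on each live discovery controller; the host re-reads the discovery log page and attaches a second path per subsystem, which the trace just below confirms. The two checks applied afterwards are the notification counter and the per-controller path list, roughly (expected values match the comparisons below):

    rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length'    # 0: new paths create no new bdevs
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # 4420 4421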
00:22:10.086 [2024-07-25 05:28:14.015031] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:10.086 [2024-07-25 05:28:14.015052] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:10.086 [2024-07-25 05:28:14.015863] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:10.086 [2024-07-25 05:28:14.015886] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:10.086 [2024-07-25 05:28:14.015893] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:10.086 [2024-07-25 05:28:14.015911] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:10.087 [2024-07-25 05:28:14.060771] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:10.087 [2024-07-25 05:28:14.060799] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:10.087 [2024-07-25 05:28:14.061759] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:10.087 [2024-07-25 05:28:14.061780] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 
-- # xargs 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.046 05:28:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.046 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.046 [2024-07-25 05:28:15.123430] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:11.046 [2024-07-25 05:28:15.123473] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:11.046 [2024-07-25 05:28:15.123514] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:11.046 [2024-07-25 05:28:15.123530] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:11.046 [2024-07-25 05:28:15.123617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.123651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.123665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.123675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.123685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.123695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.123705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.123714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.123724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.047 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.047 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:11.047 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.047 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.047 [2024-07-25 05:28:15.131439] bdev_nvme.c:6993:discovery_aer_cb: 
*INFO*: Discovery[10.0.0.3:8009] got aer 00:22:11.047 [2024-07-25 05:28:15.131507] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:11.047 [2024-07-25 05:28:15.133240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.133276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.133290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.133299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.133310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.133319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.133329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.047 [2024-07-25 05:28:15.133338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.047 [2024-07-25 05:28:15.133347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.133566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.047 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.047 05:28:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:11.047 [2024-07-25 05:28:15.143208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.143587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.047 [2024-07-25 05:28:15.143705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.047 [2024-07-25 05:28:15.143736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.047 [2024-07-25 05:28:15.143749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.143767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.143783] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.047 [2024-07-25 05:28:15.143792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.047 [2024-07-25 05:28:15.143803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:11.047 [2024-07-25 05:28:15.143819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.047 [2024-07-25 05:28:15.153218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.047 [2024-07-25 05:28:15.153316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.047 [2024-07-25 05:28:15.153339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.047 [2024-07-25 05:28:15.153350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.153366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.153381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.047 [2024-07-25 05:28:15.153391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.047 [2024-07-25 05:28:15.153401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.047 [2024-07-25 05:28:15.153416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.047 [2024-07-25 05:28:15.153655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.047 [2024-07-25 05:28:15.153733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.047 [2024-07-25 05:28:15.153754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.047 [2024-07-25 05:28:15.153765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.153782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.153796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.047 [2024-07-25 05:28:15.153804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.047 [2024-07-25 05:28:15.153813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.047 [2024-07-25 05:28:15.153837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
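The burst of reset/reconnect errors beginning here is expected, not a test failure: the 4420 listeners were just removed from both subsystems, so every reconnect attempt to 10.0.0.2:4420 (tqpair 0x22156b0) and 10.0.0.3:4420 (tqpair 0x2215380) is refused with errno 111. On Linux that value is ECONNREFUSED, which is easy to confirm:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'    # ECONNREFUSED Connection refused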
00:22:11.047 [2024-07-25 05:28:15.163276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.047 [2024-07-25 05:28:15.163366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.047 [2024-07-25 05:28:15.163389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.047 [2024-07-25 05:28:15.163400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.163416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.163431] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.047 [2024-07-25 05:28:15.163440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.047 [2024-07-25 05:28:15.163449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.047 [2024-07-25 05:28:15.163484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.047 [2024-07-25 05:28:15.163703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.047 [2024-07-25 05:28:15.163779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.047 [2024-07-25 05:28:15.163800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.047 [2024-07-25 05:28:15.163811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.163827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.163841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.047 [2024-07-25 05:28:15.163850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.047 [2024-07-25 05:28:15.163859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.047 [2024-07-25 05:28:15.163874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.047 [2024-07-25 05:28:15.173336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.047 [2024-07-25 05:28:15.173427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.047 [2024-07-25 05:28:15.173449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.047 [2024-07-25 05:28:15.173460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.173479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.173514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.047 [2024-07-25 05:28:15.173526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.047 [2024-07-25 05:28:15.173535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.047 [2024-07-25 05:28:15.173550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.047 [2024-07-25 05:28:15.173749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.047 [2024-07-25 05:28:15.173811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.047 [2024-07-25 05:28:15.173832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.047 [2024-07-25 05:28:15.173842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.047 [2024-07-25 05:28:15.173859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.047 [2024-07-25 05:28:15.173873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.047 [2024-07-25 05:28:15.173882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.047 [2024-07-25 05:28:15.173892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.048 [2024-07-25 05:28:15.173906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
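bdev_nvme keeps alternating between the two dead paths at roughly 10 ms intervals (the 05:28:15.14/.15/.16... timestamps step through both tqpairs in turn). Once the discovery poller digests the updated log page, the refused 4420 paths get retired and only 4421 should remain on each controller; presumably that end state is verified after this window with the same path check used earlier, along the lines of:

    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expected: 4421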
00:22:11.048 [2024-07-25 05:28:15.183396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.048 [2024-07-25 05:28:15.183493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.183515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.048 [2024-07-25 05:28:15.183527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.183543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.183576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.183587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.183597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.048 [2024-07-25 05:28:15.183612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.048 [2024-07-25 05:28:15.183783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.048 [2024-07-25 05:28:15.183844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.183875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.048 [2024-07-25 05:28:15.183887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.183904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.183918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.183928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.183937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.048 [2024-07-25 05:28:15.183963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.048 [2024-07-25 05:28:15.193457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.048 [2024-07-25 05:28:15.193554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.193576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.048 [2024-07-25 05:28:15.193589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.193606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.193641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.193652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.193662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.048 [2024-07-25 05:28:15.193677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.048 [2024-07-25 05:28:15.193817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.048 [2024-07-25 05:28:15.193879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.193899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.048 [2024-07-25 05:28:15.193910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.193927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.193941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.193966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.193979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.048 [2024-07-25 05:28:15.193995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.048 [2024-07-25 05:28:15.203519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.048 [2024-07-25 05:28:15.203613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.203635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.048 [2024-07-25 05:28:15.203646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.203663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.203698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.203710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.203720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.048 [2024-07-25 05:28:15.203734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.048 [2024-07-25 05:28:15.203850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.048 [2024-07-25 05:28:15.203912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.203932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.048 [2024-07-25 05:28:15.203943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.203972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.203989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.203998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.204007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.048 [2024-07-25 05:28:15.204022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.048 [2024-07-25 05:28:15.213588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.048 [2024-07-25 05:28:15.213678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.213700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.048 [2024-07-25 05:28:15.213712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.213728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.213771] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.213784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.213793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.048 [2024-07-25 05:28:15.213808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.048 [2024-07-25 05:28:15.213884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.048 [2024-07-25 05:28:15.213945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.048 [2024-07-25 05:28:15.213981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.048 [2024-07-25 05:28:15.213993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.048 [2024-07-25 05:28:15.214009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.048 [2024-07-25 05:28:15.214025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.048 [2024-07-25 05:28:15.214034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.048 [2024-07-25 05:28:15.214043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.048 [2024-07-25 05:28:15.214058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.307 [2024-07-25 05:28:15.223649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.307 [2024-07-25 05:28:15.223744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.307 [2024-07-25 05:28:15.223767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.307 [2024-07-25 05:28:15.223778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.307 [2024-07-25 05:28:15.223795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.307 [2024-07-25 05:28:15.223828] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.307 [2024-07-25 05:28:15.223839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.307 [2024-07-25 05:28:15.223849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.307 [2024-07-25 05:28:15.223864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.307 [2024-07-25 05:28:15.223916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.307 [2024-07-25 05:28:15.223991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.307 [2024-07-25 05:28:15.224012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.307 [2024-07-25 05:28:15.224023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.307 [2024-07-25 05:28:15.224040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.307 [2024-07-25 05:28:15.224055] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.307 [2024-07-25 05:28:15.224064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.307 [2024-07-25 05:28:15.224073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.307 [2024-07-25 05:28:15.224088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.307 [2024-07-25 05:28:15.233709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.307 [2024-07-25 05:28:15.233801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.307 [2024-07-25 05:28:15.233823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.307 [2024-07-25 05:28:15.233835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.307 [2024-07-25 05:28:15.233860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.307 [2024-07-25 05:28:15.233896] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.307 [2024-07-25 05:28:15.233908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.307 [2024-07-25 05:28:15.233918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.307 [2024-07-25 05:28:15.233933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.307 [2024-07-25 05:28:15.233975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.308 [2024-07-25 05:28:15.234042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.308 [2024-07-25 05:28:15.234061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.308 [2024-07-25 05:28:15.234072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.308 [2024-07-25 05:28:15.234088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.308 [2024-07-25 05:28:15.234103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.308 [2024-07-25 05:28:15.234112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.308 [2024-07-25 05:28:15.234121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.308 [2024-07-25 05:28:15.234136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.308 [2024-07-25 05:28:15.243768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.308 [2024-07-25 05:28:15.243859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.308 [2024-07-25 05:28:15.243882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.308 [2024-07-25 05:28:15.243894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.308 [2024-07-25 05:28:15.243911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.308 [2024-07-25 05:28:15.243943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.308 [2024-07-25 05:28:15.243967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.308 [2024-07-25 05:28:15.243979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.308 [2024-07-25 05:28:15.243995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.308 [2024-07-25 05:28:15.244022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.308 [2024-07-25 05:28:15.244083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.308 [2024-07-25 05:28:15.244104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.308 [2024-07-25 05:28:15.244115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.308 [2024-07-25 05:28:15.244130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.308 [2024-07-25 05:28:15.244145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.308 [2024-07-25 05:28:15.244154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.308 [2024-07-25 05:28:15.244163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.308 [2024-07-25 05:28:15.244178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:11.308 [2024-07-25 05:28:15.253825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:11.308 [2024-07-25 05:28:15.253912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.308 [2024-07-25 05:28:15.253933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215380 with addr=10.0.0.3, port=4420 00:22:11.308 [2024-07-25 05:28:15.253946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215380 is same with the state(5) to be set 00:22:11.308 [2024-07-25 05:28:15.253978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2215380 (9): Bad file descriptor 00:22:11.308 [2024-07-25 05:28:15.254014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:11.308 [2024-07-25 05:28:15.254026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:11.308 [2024-07-25 05:28:15.254036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:11.308 [2024-07-25 05:28:15.254051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.308 [2024-07-25 05:28:15.254075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.308 [2024-07-25 05:28:15.254137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.308 [2024-07-25 05:28:15.254158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22156b0 with addr=10.0.0.2, port=4420 00:22:11.308 [2024-07-25 05:28:15.254168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22156b0 is same with the state(5) to be set 00:22:11.308 [2024-07-25 05:28:15.254184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22156b0 (9): Bad file descriptor 00:22:11.308 [2024-07-25 05:28:15.254199] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.308 [2024-07-25 05:28:15.254208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.308 [2024-07-25 05:28:15.254218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.308 [2024-07-25 05:28:15.254232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
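A note on the failure signature above: errno = 111 is ECONNREFUSED. The target has already torn down its 4420 listeners, so each pass of spdk_nvme_ctrlr_reconnect_poll_async is refused (the hosts retry roughly every 10 ms in this trace) until the discovery log page handling just below replaces the 4420 paths with 4421. As a hedged illustration only, the run's own RPC socket and jq filter could be reused to wait for that switch; the rpc.py path, socket, and filter are taken verbatim from this log, while watch_failover itself is a hypothetical helper, not part of the harness:

    # Hypothetical watcher (illustration, not part of the harness): poll a
    # controller's active trsvcid until the discovery poller re-attaches it
    # on the expected port.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    watch_failover() {
        local ctrlr=$1 want=$2 got
        while :; do
            # Same socket and jq filter as get_subsystem_paths in this trace.
            got=$("$rpc_py" -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
                  | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
            [[ "$got" == "$want" ]] && return 0
            sleep 1
        done
    }
    watch_failover mdns0_nvme0 4421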
00:22:11.308 [2024-07-25 05:28:15.263012] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:11.308 [2024-07-25 05:28:15.263046] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:11.308 [2024-07-25 05:28:15.263069] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:11.308 [2024-07-25 05:28:15.263117] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:11.308 [2024-07-25 05:28:15.263134] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:11.308 [2024-07-25 05:28:15.263150] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:11.308 [2024-07-25 05:28:15.349114] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:11.308 [2024-07-25 05:28:15.349186] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.500 05:28:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:22:12.500 [2024-07-25 05:28:16.457603] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@176 -- # get_bdev_list 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.434 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.693 [2024-07-25 05:28:17.682594] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:22:13.693 2024/07/25 05:28:17 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:13.693 request: 00:22:13.693 { 00:22:13.693 "method": "bdev_nvme_start_mdns_discovery", 00:22:13.693 "params": { 00:22:13.693 "name": "mdns", 00:22:13.693 "svcname": "_nvme-disc._http", 00:22:13.693 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:13.693 } 00:22:13.693 } 00:22:13.693 Got JSON-RPC error response 00:22:13.693 GoRPCClient: error on JSON-RPC call 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.693 05:28:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:22:14.259 [2024-07-25 05:28:18.271187] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:14.259 [2024-07-25 05:28:18.371183] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:14.529 [2024-07-25 05:28:18.471192] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:22:14.529 [2024-07-25 05:28:18.471226] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:14.529 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:14.529 cookie is 0 00:22:14.529 is_local: 1 00:22:14.529 our_own: 0 00:22:14.529 wide_area: 0 00:22:14.529 multicast: 1 00:22:14.529 cached: 1 00:22:14.529 [2024-07-25 05:28:18.571192] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:22:14.529 [2024-07-25 05:28:18.571229] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:14.529 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:14.529 cookie is 0 00:22:14.529 is_local: 1 00:22:14.529 our_own: 0 00:22:14.529 wide_area: 0 00:22:14.529 multicast: 1 00:22:14.529 cached: 1 00:22:14.530 [2024-07-25 05:28:18.571243] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:22:14.530 [2024-07-25 05:28:18.671207] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:22:14.530 [2024-07-25 05:28:18.671253] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:14.530 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:14.530 cookie is 0 00:22:14.530 is_local: 1 00:22:14.530 our_own: 0 00:22:14.530 wide_area: 0 00:22:14.530 multicast: 1 00:22:14.530 cached: 1 00:22:14.806 [2024-07-25 05:28:18.771198] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:22:14.806 [2024-07-25 05:28:18.771239] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:14.806 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:22:14.806 cookie is 0 00:22:14.806 is_local: 1 00:22:14.806 our_own: 0 00:22:14.806 wide_area: 0 00:22:14.806 multicast: 1 00:22:14.806 cached: 1 00:22:14.806 [2024-07-25 05:28:18.771256] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:22:15.371 [2024-07-25 05:28:19.482395] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:15.371 [2024-07-25 05:28:19.482441] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:15.371 [2024-07-25 05:28:19.482462] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:15.629 [2024-07-25 05:28:19.568512] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:22:15.629 [2024-07-25 05:28:19.628760] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:15.629 [2024-07-25 05:28:19.628797] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:15.629 [2024-07-25 05:28:19.682283] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:15.629 [2024-07-25 05:28:19.682319] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:15.629 [2024-07-25 05:28:19.682340] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:15.629 [2024-07-25 05:28:19.768427] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:22:15.887 [2024-07-25 05:28:19.828807] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:15.887 [2024-07-25 05:28:19.828855] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.167 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # 
local arg=rpc_cmd 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.168 [2024-07-25 05:28:22.879367] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:22:19.168 2024/07/25 05:28:22 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:19.168 request: 00:22:19.168 { 00:22:19.168 "method": "bdev_nvme_start_mdns_discovery", 00:22:19.168 "params": { 00:22:19.168 "name": "cdc", 00:22:19.168 "svcname": "_nvme-disc._tcp", 00:22:19.168 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:19.168 } 00:22:19.168 } 00:22:19.168 Got JSON-RPC error response 00:22:19.168 GoRPCClient: error on JSON-RPC call 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.168 05:28:22 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:22:19.168 05:28:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 93335 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 93335 00:22:19.168 [2024-07-25 05:28:23.082484] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 93351 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:22:19.168 Got SIGTERM, quitting. 00:22:19.168 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:19.168 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:19.168 avahi-daemon 0.8 exiting. 
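The shutdown above is ordered: the script kills its helper pids (93335, then 93351), the avahi responder exits on SIGTERM at that point, and nvmftestfini in the lines below unloads nvme-tcp, nvme-fabrics, and nvme-keyring before killing the target app (pid 93285). A minimal sketch of that ordering, where $avahi_pid and $tgt_pid are assumed stand-ins for the harness's own pid bookkeeping:

    # Sketch of the teardown ordering visible in this trace; the variable
    # names are assumptions, not the harness's actual bookkeeping.
    nvmf_teardown() {
        kill "$avahi_pid" 2>/dev/null; wait "$avahi_pid" 2>/dev/null  # mDNS helper first
        modprobe -v -r nvme-tcp                                       # host transport modules next
        modprobe -v -r nvme-fabrics
        kill -0 "$tgt_pid" 2>/dev/null && kill "$tgt_pid"             # nvmf target app last
    }
    trap nvmf_teardown SIGINT SIGTERM EXIT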
00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.168 rmmod nvme_tcp 00:22:19.168 rmmod nvme_fabrics 00:22:19.168 rmmod nvme_keyring 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93285 ']' 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93285 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # '[' -z 93285 ']' 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # kill -0 93285 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # uname 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93285 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:19.168 killing process with pid 93285 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93285' 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@969 -- # kill 93285 00:22:19.168 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@974 -- # wait 93285 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:19.426 00:22:19.426 real 0m19.750s 00:22:19.426 user 0m38.812s 00:22:19.426 sys 0m1.806s 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:19.426 
************************************ 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.426 END TEST nvmf_mdns_discovery 00:22:19.426 ************************************ 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.426 ************************************ 00:22:19.426 START TEST nvmf_host_multipath 00:22:19.426 ************************************ 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:19.426 * Looking for test storage... 00:22:19.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.426 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.427 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
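Because NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init, and the remainder of this excerpt traces the resulting topology: the target interfaces sit in the nvmf_tgt_ns_spdk namespace, the initiator stays in the root namespace at 10.0.0.1, veth peers hang off an nvmf_br bridge, and iptables rules admit traffic to the 4420 listener. Condensed from the commands traced below (a sketch for orientation, not a substitute for nvmf/common.sh):

    # Condensed from the nvmf_veth_init steps traced below. Target-side veth
    # ends move into a private netns; a bridge joins the host-side peers.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT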
00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:19.685 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:19.686 Cannot find device "nvmf_tgt_br" 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:19.686 Cannot find device "nvmf_tgt_br2" 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:19.686 Cannot find device "nvmf_tgt_br" 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:19.686 Cannot find device "nvmf_tgt_br2" 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.686 05:28:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:19.686 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:19.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:22:19.945 00:22:19.945 --- 10.0.0.2 ping statistics --- 00:22:19.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.945 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:19.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:19.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:22:19.945 00:22:19.945 --- 10.0.0.3 ping statistics --- 00:22:19.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.945 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:19.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:19.945 00:22:19.945 --- 10.0.0.1 ping statistics --- 00:22:19.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.945 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.945 05:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=93904 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 93904 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 93904 ']' 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
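
Condensed, the nvmf_veth_init sequence traced above builds a two-path test topology: one initiator-side veth pair and two target-side pairs, with the target ends moved into a private namespace (this run listens on 10.0.0.2 ports 4420 and 4421; 10.0.0.3 is provisioned but idle here), all joined by one bridge. A sketch of the same ip/iptables commands, with the error-tolerant teardown at the start omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target leg 2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up                                   # host-side links up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings then verify both directions, and nvmf_tgt is started inside the namespace via ip netns exec, which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD above.
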
00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.945 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:19.945 [2024-07-25 05:28:24.065419] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:22:19.945 [2024-07-25 05:28:24.065524] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.203 [2024-07-25 05:28:24.201079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:20.203 [2024-07-25 05:28:24.281890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.203 [2024-07-25 05:28:24.281945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.203 [2024-07-25 05:28:24.281970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.203 [2024-07-25 05:28:24.281979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.203 [2024-07-25 05:28:24.281986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.203 [2024-07-25 05:28:24.282077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.203 [2024-07-25 05:28:24.282090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.203 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.203 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:22:20.203 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.203 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.203 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:20.461 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.461 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=93904 00:22:20.461 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:20.719 [2024-07-25 05:28:24.659791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.719 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:20.977 Malloc0 00:22:20.977 05:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:21.236 05:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.494 05:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
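
Stripped of xtrace noise, the bring-up interleaved with the notices above reduces to a short RPC sequence. A condensed sketch follows; $rpc stands for the rpc.py path used in this run, the 4421 listener and the host-side attach appear just below, and the flag readings in the comments follow rpc.py's usual options rather than anything quoted from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (default /var/tmp/spdk.sock):
  $rpc nvmf_create_transport -t tcp -o -u 8192        # opts from NVMF_TRANSPORT_OPTS
  $rpc bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                # -r: ANA reporting, -m: max namespaces
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side (bdevperf's socket): attach both ports under one bdev name; the
  # second call's -x multipath registers it as an alternate path to Nvme0.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Each confirm_io_on_port cycle below then flips the listeners' ANA states, runs the nvmf_path.bt bpftrace probes for six seconds, and uses nvmf_subsystem_get_listeners plus jq to check that the @path[10.0.0.2, <port>] samples landed on the listener whose ANA state matches.
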
00:22:21.753 [2024-07-25 05:28:25.793226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.753 05:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.011 [2024-07-25 05:28:26.073354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=93994 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 93994 /var/tmp/bdevperf.sock 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 93994 ']' 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.011 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:22.578 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.578 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:22:22.578 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:22.578 05:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:23.144 Nvme0n1 00:22:23.144 05:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:23.402 Nvme0n1 00:22:23.402 05:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:23.402 05:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:24.775 05:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:24.775 05:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:24.775 05:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:25.032 05:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:25.032 05:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94072 00:22:25.032 05:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93904 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:25.032 05:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.597 Attaching 4 probes... 00:22:31.597 @path[10.0.0.2, 4421]: 16731 00:22:31.597 @path[10.0.0.2, 4421]: 17494 00:22:31.597 @path[10.0.0.2, 4421]: 17360 00:22:31.597 @path[10.0.0.2, 4421]: 17259 00:22:31.597 @path[10.0.0.2, 4421]: 17332 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94072 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:31.597 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:31.854 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:31.854 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93904 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:31.854 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94205 00:22:31.854 05:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:38.413 05:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:38.413 05:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.413 Attaching 4 probes... 00:22:38.413 @path[10.0.0.2, 4420]: 16725 00:22:38.413 @path[10.0.0.2, 4420]: 17168 00:22:38.413 @path[10.0.0.2, 4420]: 17459 00:22:38.413 @path[10.0.0.2, 4420]: 17327 00:22:38.413 @path[10.0.0.2, 4420]: 16949 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94205 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:38.413 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:38.669 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:38.669 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94340 00:22:38.669 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:38.669 05:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93904 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:45.226 05:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:45.226 05:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:45.226 05:28:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:45.226 Attaching 4 probes... 00:22:45.226 @path[10.0.0.2, 4421]: 12676 00:22:45.226 @path[10.0.0.2, 4421]: 16855 00:22:45.226 @path[10.0.0.2, 4421]: 16967 00:22:45.226 @path[10.0.0.2, 4421]: 16646 00:22:45.226 @path[10.0.0.2, 4421]: 16956 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94340 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:45.226 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:45.484 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:45.484 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94466 00:22:45.484 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93904 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:45.484 05:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.043 Attaching 4 probes... 
00:22:52.043 00:22:52.043 00:22:52.043 00:22:52.043 00:22:52.043 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94466 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:52.043 05:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:52.043 05:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:52.302 05:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:52.302 05:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94597 00:22:52.302 05:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93904 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:52.302 05:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:58.862 Attaching 4 probes... 
00:22:58.862 @path[10.0.0.2, 4421]: 15906 00:22:58.862 @path[10.0.0.2, 4421]: 16947 00:22:58.862 @path[10.0.0.2, 4421]: 17098 00:22:58.862 @path[10.0.0.2, 4421]: 17091 00:22:58.862 @path[10.0.0.2, 4421]: 17040 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94597 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:58.862 05:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:58.862 [2024-07-25 05:29:02.974841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.862 [2024-07-25 05:29:02.974993] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.863 [2024-07-25 05:29:02.975347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d330 is same with the state(5) to be set 00:22:58.863 05:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:23:00.242 05:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:00.242 05:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94737 00:23:00.242 05:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64
-- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93904 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:00.242 05:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:06.803 Attaching 4 probes... 00:23:06.803 @path[10.0.0.2, 4420]: 16349 00:23:06.803 @path[10.0.0.2, 4420]: 16647 00:23:06.803 @path[10.0.0.2, 4420]: 16807 00:23:06.803 @path[10.0.0.2, 4420]: 16894 00:23:06.803 @path[10.0.0.2, 4420]: 16924 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94737 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:06.803 [2024-07-25 05:29:10.593305] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:06.803 05:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:13.362 05:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:13.362 05:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94925 00:23:13.362 05:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:13.362 05:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93904 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:19.929 05:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:19.929 05:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | 
.address.trsvcid' 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:19.929 Attaching 4 probes... 00:23:19.929 @path[10.0.0.2, 4421]: 15448 00:23:19.929 @path[10.0.0.2, 4421]: 16626 00:23:19.929 @path[10.0.0.2, 4421]: 16033 00:23:19.929 @path[10.0.0.2, 4421]: 16823 00:23:19.929 @path[10.0.0.2, 4421]: 16605 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94925 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 93994 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 93994 ']' 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 93994 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93994 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:19.929 killing process with pid 93994 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93994' 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 93994 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 93994 00:23:19.929 Connection closed with partial response: 00:23:19.929 00:23:19.929 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 93994 00:23:19.929 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:19.929 [2024-07-25 05:28:26.163891] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
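
Everything from the cat of try.txt above to the end of this excerpt is bdevperf's own log replayed into the console. The wall of nvme_qpair completions reading ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the point of the test: each time a listener's ANA state is flipped, in-flight WRITEs on that path complete with a path-related status, and the multipath bdev is expected to requeue them on the surviving listener. A throwaway tally over such a saved log (illustrative helper, not part of multipath.sh):

  # Count ANA path-error completions per queue id in the bdevperf log.
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
      /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c
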
00:23:19.929 [2024-07-25 05:28:26.164012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93994 ] 00:23:19.929 [2024-07-25 05:28:26.298541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.929 [2024-07-25 05:28:26.368146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.929 Running I/O for 90 seconds... 00:23:19.929 [2024-07-25 05:28:35.916011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.929 [2024-07-25 05:28:35.916093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:19.929 [2024-07-25 05:28:35.916814] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.929 [2024-07-25 05:28:35.916829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:19.929-00:23:19.932 [2024-07-25 05:28:35.916850 - 05:28:35.922652] nvme_qpair.c: 109 further *NOTICE* command/completion pairs of the same form: WRITE sqid:1 nsid:1 len:8 over lba 51856-52648 and READ sqid:1 nsid:1 len:8 over lba 51640-51704, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:23:19.932-00:23:19.934 [2024-07-25 05:28:42.446788 - 05:28:42.452413] nvme_qpair.c: 72 further *NOTICE* pairs: WRITE sqid:1 nsid:1 len:8 over lba 107696-107944 and READ sqid:1 nsid:1 len:8 over lba 107376-107688, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:23:19.934-00:23:19.935 [2024-07-25 05:28:49.516051 - 05:28:49.516767] nvme_qpair.c: 14 further *NOTICE* pairs: WRITE sqid:1 nsid:1 len:8 over lba 122528-122632, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:23:19.935 [2024-07-25 05:28:49.516789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.516805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.516827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.516843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.516899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.516919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.516945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.516976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.517002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.517030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.517055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.517071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.517095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.517111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.517133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.517149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.517172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.517187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.517211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.517226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.518436] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.518481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.518520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.518560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.518599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.935 [2024-07-25 05:28:49.518639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121864 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.518968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.518988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:19.935 [2024-07-25 05:28:49.519312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.935 [2024-07-25 05:28:49.519328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.936 [2024-07-25 05:28:49.519657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.519946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.519984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520203] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 
[2024-07-25 05:28:49.520637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.520948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.520988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.521005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.521031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.521047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.521073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.936 [2024-07-25 05:28:49.521089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.936 [2024-07-25 05:28:49.521116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.937 [2024-07-25 05:28:49.521663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.521941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.521974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:23:19.937 [2024-07-25 05:28:49.522019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.937 [2024-07-25 05:28:49.522826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:19.937 [2024-07-25 05:28:49.522852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 
05:28:49.522868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:28:49.522894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:28:49.522909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:28:49.522935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:28:49.522975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:28:49.523007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:28:49.523023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:28:49.523050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:28:49.523065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:28:49.523092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:28:49.523108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.974763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.938 [2024-07-25 05:29:02.974836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.974894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.938 [2024-07-25 05:29:02.974916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.974939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.938 [2024-07-25 05:29:02.974970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.974995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.938 [2024-07-25 05:29:02.975011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32416 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.938 [2024-07-25 05:29:02.975049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.938 [2024-07-25 05:29:02.975086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.938 [2024-07-25 05:29:02.975123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:29:02.975174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:29:02.975211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:29:02.975248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:29:02.975285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:29:02.975321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:29:02.975388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.938 [2024-07-25 05:29:02.975426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.938 [2024-07-25 05:29:02.975448] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.938 [2024-07-25 05:29:02.975463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:19.938 [2024-07-25 05:29:02.975484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.938 [2024-07-25 05:29:02.975498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:19.938 [2024-07-25 05:29:02.975520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.938 [2024-07-25 05:29:02.975535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:19.938 - 00:23:19.941 [2024-07-25 05:29:02.975787 - 05:29:02.979213] [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs: every remaining queued READ (lba 31568-32376) and WRITE (lba 32448-32512) on qid:1 completes as ABORTED - SQ DELETION (00/08) while the qpair is drained ...]
00:23:19.941 [2024-07-25 05:29:02.979505] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2536250 was disconnected and freed. reset controller.
00:23:19.941 [2024-07-25 05:29:02.980662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:19.941 [2024-07-25 05:29:02.980747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.941 [2024-07-25 05:29:02.980770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.941 [2024-07-25 05:29:02.980805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b29c0 (9): Bad file descriptor
00:23:19.941 [2024-07-25 05:29:02.980972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.941 [2024-07-25 05:29:02.981005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b29c0 with addr=10.0.0.2, port=4421
00:23:19.941 [2024-07-25 05:29:02.981022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b29c0 is same with the state(5) to be set
00:23:19.941 [2024-07-25 05:29:02.981049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b29c0 (9): Bad file descriptor
00:23:19.941 [2024-07-25 05:29:02.981071] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:19.941 [2024-07-25 05:29:02.981086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:19.941 [2024-07-25 05:29:02.981099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:19.941 [2024-07-25 05:29:02.981125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:19.941 [2024-07-25 05:29:02.981140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:19.941 [2024-07-25 05:29:13.080165] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
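The records above show the multipath failover from the host's perspective: the active path returns ASYMMETRIC ACCESS INACCESSIBLE (an ANA state change), every queued I/O is failed with ABORTED - SQ DELETION as the qpair is torn down, and bdev_nvme then reconnects, here against port 4421, retrying until the reset succeeds roughly ten seconds later (05:29:02.98 to 05:29:13.08). The attach commands that set this up are outside this excerpt; the following is only a minimal sketch of how two paths to the same subsystem are typically registered with the bdevperf RPC server. The -x multipath flag and the socket path are assumptions; the address, ports, and NQN are taken from the log.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Hypothetical two-path attach; not part of the captured log.
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath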
00:23:19.941 Received shutdown signal, test time was about 55.541426 seconds
00:23:19.941
00:23:19.941                                                         Latency(us)
00:23:19.941 Device Information            : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min        max
00:23:19.941 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:19.941      Verification LBA range: start 0x0 length 0x4000
00:23:19.941      Nvme0n1              :      55.54  7199.13    28.12     0.00    0.00   17748.57    142.43 7046430.72
00:23:19.941 ===================================================================================================================
00:23:19.941      Total                :             7199.13    28.12     0.00    0.00   17748.57    142.43 7046430.72
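A consistency check on the summary row (arithmetic added for context, not part of the captured output): 7199.13 IOPS × 4096 B per I/O is about 29,487,637 B/s, and 29,487,637 / 1,048,576 ≈ 28.12, matching the reported 28.12 MiB/s for the 4 KiB verify workload. The 7,046,430.72 µs maximum latency (about 7.0 s) is consistent with I/O stalling across the controller reset and reconnect recorded above.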
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:19.941 rmmod nvme_tcp
00:23:19.941 rmmod nvme_fabrics
00:23:19.941 rmmod nvme_keyring
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 93904 ']'
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 93904
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 93904 ']'
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 93904
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93904
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:19.941 killing process with pid 93904
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93904'
00:23:19.941 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 93904
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 93904
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:19.942 05:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:23:19.942 ************************************
00:23:19.942 END TEST nvmf_host_multipath
00:23:19.942 ************************************
00:23:19.942
00:23:19.942 real    1m0.516s
00:23:19.942 user    2m52.876s
00:23:19.942 sys     0m12.926s
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:19.942 ************************************
00:23:19.942 START TEST nvmf_timeout
00:23:19.942 ************************************
00:23:19.942 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:23:20.201 * Looking for test storage...
00:23:20.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...same triple repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.201 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...same triple repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same triple repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same triple repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
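NVME_CONNECT and NVME_HOST, assembled above, are what the harness expands when a kernel-initiator connect is needed elsewhere in these host tests; no such connect is issued in this excerpt, so the following is only an illustrative expansion, using the hostnqn/hostid generated above and the first target address and port that the setup below assigns:

# Illustrative expansion of $NVME_CONNECT "${NVME_HOST[@]}" (not run in this log):
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 \
    --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78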
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]]
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]]
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]]
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]]
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]]
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:23:20.202 Cannot find device "nvmf_tgt_br"
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:23:20.202 Cannot find device "nvmf_tgt_br2"
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:23:20.202 Cannot find device "nvmf_tgt_br"
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:23:20.202 Cannot find device "nvmf_tgt_br2"
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:20.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:20.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:23:20.202 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:20.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:23:20.461 00:23:20.461 --- 10.0.0.2 ping statistics --- 00:23:20.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.461 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:20.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:20.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:23:20.461 00:23:20.461 --- 10.0.0.3 ping statistics --- 00:23:20.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.461 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:20.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:20.461 00:23:20.461 --- 10.0.0.1 ping statistics --- 00:23:20.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.461 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95246 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95246 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95246 ']' 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.461 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.462 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.462 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.462 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.462 [2024-07-25 05:29:24.592170] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:23:20.462 [2024-07-25 05:29:24.592254] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.720 [2024-07-25 05:29:24.727306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:20.720 [2024-07-25 05:29:24.795093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
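What nvmf/common.sh has just assembled above is a small veth test network: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target interfaces nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.2/10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are joined by the nvmf_br bridge with iptables opening TCP port 4420. A condensed standalone sketch of the same topology, using only the iproute2/iptables commands traced above (the second target leg, nvmf_tgt_if2/10.0.0.3, is set up the same way and elided here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up         # join the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # same sanity check as traced above
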
00:23:20.720 [2024-07-25 05:29:24.795186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.720 [2024-07-25 05:29:24.795204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.720 [2024-07-25 05:29:24.795214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.720 [2024-07-25 05:29:24.795222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.720 [2024-07-25 05:29:24.795356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.720 [2024-07-25 05:29:24.795367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.720 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.720 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:20.720 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.720 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.720 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.978 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.979 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.979 05:29:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:21.242 [2024-07-25 05:29:25.176176] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.242 05:29:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:21.501 Malloc0 00:23:21.501 05:29:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:21.759 05:29:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:22.017 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.275 [2024-07-25 05:29:26.316020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=95324 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 95324 /var/tmp/bdevperf.sock 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95324 ']' 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.275 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:22.275 [2024-07-25 05:29:26.377018] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:23:22.275 [2024-07-25 05:29:26.377123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95324 ] 00:23:22.534 [2024-07-25 05:29:26.512076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.534 [2024-07-25 05:29:26.602053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.534 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.534 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:22.534 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:22.792 05:29:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:23.358 NVMe0n1 00:23:23.358 05:29:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=95355 00:23:23.358 05:29:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.358 05:29:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:23.358 Running I/O for 10 seconds... 
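Stripped of the xtrace noise, the target/initiator setup traced above reduces to the following RPC sequence. This is a sketch distilled from the log, not the test script itself; $rpc is shorthand for the rpc.py path shown in the trace, and -r -1 on bdev_nvme_set_options is assumed here to be the command retry count (retry indefinitely):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side (nvmf_tgt runs inside nvmf_tgt_ns_spdk, RPC on /var/tmp/spdk.sock)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf with its own RPC socket, QD 128, 4 KiB verify I/O for 10 s
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2            # the knobs under test
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The two attach_controller options are what host/timeout.sh exercises: on a dropped connection, bdev_nvme retries every --reconnect-delay-sec seconds and gives up on the controller after --ctrlr-loss-timeout-sec seconds.
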
00:23:24.294 05:29:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:24.555 [2024-07-25 05:29:28.559236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448730 is same with the state(5) to be set
00:23:24.555 [2024-07-25 05:29:28.559296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448730 is same with the state(5) to be set
00:23:24.556 [... the same tcp.c:1653 message repeats about 45 more times for tqpair=0x1448730 while the listener is torn down ...]
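Removing the only listener drops bdevperf's TCP connection mid-run: the initiator's qpair is torn down, and every command still queued on qid:1 is completed manually with ABORTED - SQ DELETION, one print_command/print_completion pair per I/O, as condensed below. When triaging a log like this, a tally is usually all that is needed; a sketch, with build.log standing in for wherever the console output was captured:

    grep -c 'ABORTED - SQ DELETION' build.log    # one completion line per aborted command
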
00:23:24.556 [2024-07-25 05:29:28.560910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.556 [2024-07-25 05:29:28.560965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.556 [2024-07-25 05:29:28.561551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:24.556 [2024-07-25 05:29:28.561560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.557 [... the same pattern repeats for every remaining queued READ/WRITE on qid:1, lba 82528 through 83536, len:8 each ...]
00:23:24.559 [2024-07-25 05:29:28.563682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:24.559 [2024-07-25 05:29:28.563691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:24.559 [2024-07-25 05:29:28.563700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83544 len:8 PRP1 0x0 PRP2 0x0
00:23:24.559 [2024-07-25 05:29:28.563708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.559 [2024-07-25 05:29:28.563752] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4b8d0 was disconnected and freed. reset controller.
00:23:24.559 [2024-07-25 05:29:28.564022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:24.559 [2024-07-25 05:29:28.564102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbde240 (9): Bad file descriptor
00:23:24.559 [2024-07-25 05:29:28.564202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.559 [2024-07-25 05:29:28.564223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbde240 with addr=10.0.0.2, port=4420
00:23:24.559 [2024-07-25 05:29:28.564234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde240 is same with the state(5) to be set
00:23:24.559 [2024-07-25 05:29:28.564252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbde240 (9): Bad file descriptor
00:23:24.559 [2024-07-25 05:29:28.564268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:24.559 [2024-07-25 05:29:28.564277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:24.559 [2024-07-25 05:29:28.564287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:24.559 [2024-07-25 05:29:28.564307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:24.559 [2024-07-25 05:29:28.564318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:24.559 05:29:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:23:26.458 [2024-07-25 05:29:30.564570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.458 [2024-07-25 05:29:30.564636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbde240 with addr=10.0.0.2, port=4420
00:23:26.458 [2024-07-25 05:29:30.564652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde240 is same with the state(5) to be set
00:23:26.458 [2024-07-25 05:29:30.564678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbde240 (9): Bad file descriptor
00:23:26.458 [2024-07-25 05:29:30.564709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:26.458 [2024-07-25 05:29:30.564721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:26.458 [2024-07-25 05:29:30.564732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:26.458 [2024-07-25 05:29:30.564758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:26.458 [2024-07-25 05:29:30.564770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:26.458 05:29:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:23:26.458 05:29:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:26.458 05:29:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:23:26.715 05:29:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:23:26.715 05:29:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:23:26.715 05:29:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:23:26.715 05:29:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:23:27.280 05:29:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:23:27.280 05:29:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:23:28.651 [2024-07-25 05:29:32.565008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.651 [2024-07-25 05:29:32.565069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbde240 with addr=10.0.0.2, port=4420
00:23:28.651 [2024-07-25 05:29:32.565085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde240 is same with the state(5) to be set
00:23:28.651 [2024-07-25 05:29:32.565112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbde240 (9): Bad file descriptor
00:23:28.651 [2024-07-25 05:29:32.565131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:28.651 [2024-07-25 05:29:32.565141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:28.651 [2024-07-25 05:29:32.565151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:28.651 [2024-07-25 05:29:32.565177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:28.651 [2024-07-25 05:29:32.565190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:30.549 [2024-07-25 05:29:34.565332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:30.549 [2024-07-25 05:29:34.565378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:30.549 [2024-07-25 05:29:34.565391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:30.549 [2024-07-25 05:29:34.565401] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:23:30.549 [2024-07-25 05:29:34.565427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:31.483 
00:23:31.483                                                       Latency(us)
00:23:31.483 Device Information          : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average      min          max
00:23:31.483 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:31.483 Verification LBA range: start 0x0 length 0x4000
00:23:31.483 NVMe0n1                     : 8.15          1265.83  4.94    15.71    0.00    99727.66     2115.03      7015926.69
00:23:31.483 ===================================================================================================================
00:23:31.483 Total                       :               1265.83  4.94    15.71    0.00    99727.66     2115.03      7015926.69
00:23:31.483 0
00:23:32.049 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:23:32.049 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:32.049 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:23:32.307 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:23:32.307 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:23:32.307 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:23:32.307 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 95355
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 95324
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95324 ']'
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95324
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:32.565 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95324
00:23:32.823 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:23:32.823 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:23:32.823 killing process with pid 95324
00:23:32.823 Received shutdown signal, test time was about 9.338234 seconds
00:23:32.823 
00:23:32.823                                                       Latency(us)
00:23:32.823 Device Information          : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average      min          max
00:23:32.823 ===================================================================================================================
00:23:32.823 Total                       :               0.00     0.00    0.00     0.00    0.00         0.00         0.00
00:23:32.823 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95324'
00:23:32.823 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95324
00:23:32.823 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95324
00:23:33.083 05:29:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:33.083 [2024-07-25 05:29:37.167803] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=95511
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 95511 /var/tmp/bdevperf.sock
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95511 ']'
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:33.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:33.083 05:29:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:33.083 [2024-07-25 05:29:37.239611] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:23:33.083 [2024-07-25 05:29:37.239707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95511 ]
00:23:33.083 [2024-07-25 05:29:37.379289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:33.083 [2024-07-25 05:29:37.448316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:34.276 05:29:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:34.276 05:29:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:23:34.276 05:29:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:23:34.534 05:29:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:23:34.792 NVMe0n1
00:23:34.792 05:29:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=95559
00:23:34.792 05:29:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:34.792 05:29:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:23:35.051 Running I/O for 10 seconds...
00:23:35.988 05:29:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:35.988 [2024-07-25 05:29:40.131528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0e10 is same with the state(5) to be set
00:23:35.989 [2024-07-25 05:29:40.131920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0e10 is same with the state(5) to be set
00:23:35.989 [2024-07-25 05:29:40.131964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0e10 is same with the state(5) to be set
00:23:35.989 [2024-07-25 05:29:40.132399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.989 [2024-07-25 05:29:40.132429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.990 [2024-07-25 05:29:40.133432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.990 [2024-07-25 05:29:40.133441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.990 [2024-07-25 05:29:40.133452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:35.990 [2024-07-25 05:29:40.133463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.992 [2024-07-25 05:29:40.134902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:35.992 [2024-07-25 05:29:40.134911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.992 [2024-07-25 05:29:40.134922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-25 05:29:40.134931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.134943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.992 [2024-07-25 05:29:40.134961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.134974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.992 [2024-07-25 05:29:40.134983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.134994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.992 [2024-07-25 05:29:40.135004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.135015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.992 [2024-07-25 05:29:40.135024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.135035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.992 [2024-07-25 05:29:40.135045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.135056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.992 [2024-07-25 05:29:40.135065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.135077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.992 [2024-07-25 05:29:40.135086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.135111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.992 [2024-07-25 05:29:40.135121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.992 [2024-07-25 05:29:40.135130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82440 len:8 PRP1 0x0 PRP2 0x0 00:23:35.992 [2024-07-25 05:29:40.135139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.992 [2024-07-25 05:29:40.135192] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b1fb20 was disconnected and freed. reset controller. 
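The flood of *NOTICE* pairs above is the initiator draining its queue: once the target drops the TCP connection, every command still queued on qid:1 is completed with ABORTED - SQ DELETION, i.e. status code type 0x0 (generic) and status code 0x08 in the "(00/08)" notation, before the qpair is freed and the controller reset begins. A minimal sketch of how the nvmf_timeout test presumably provokes such a storm, assuming the same subsystem NQN and 10.0.0.2:4420 listener used throughout this run (the command itself appears verbatim later in the trace):

    # Remove the TCP listener while bdevperf I/O is in flight; the target
    # tears down the qpair and the host aborts everything still queued on it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420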
00:23:35.992 [2024-07-25 05:29:40.135447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.992 [2024-07-25 05:29:40.135529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor 00:23:35.992 [2024-07-25 05:29:40.135628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.992 [2024-07-25 05:29:40.135657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab2240 with addr=10.0.0.2, port=4420 00:23:35.992 [2024-07-25 05:29:40.135669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2240 is same with the state(5) to be set 00:23:35.992 [2024-07-25 05:29:40.135688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor 00:23:35.992 [2024-07-25 05:29:40.135703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.992 [2024-07-25 05:29:40.135713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.992 [2024-07-25 05:29:40.135723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.992 [2024-07-25 05:29:40.135744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.992 [2024-07-25 05:29:40.135755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.993 05:29:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:37.367 [2024-07-25 05:29:41.135897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.367 [2024-07-25 05:29:41.135976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab2240 with addr=10.0.0.2, port=4420 00:23:37.367 [2024-07-25 05:29:41.135995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2240 is same with the state(5) to be set 00:23:37.367 [2024-07-25 05:29:41.136023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor 00:23:37.367 [2024-07-25 05:29:41.136047] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.367 [2024-07-25 05:29:41.136057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.367 [2024-07-25 05:29:41.136067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.367 [2024-07-25 05:29:41.136092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
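Each reconnect attempt while the listener is gone fails the same way: connect() returns errno 111 (ECONNREFUSED), nvme_ctrlr_process_init reports the controller in error state, and the reset completes as failed; the test then sleeps one second (host/timeout.sh@90) and the host tries again. A hedged way to watch this retry loop from outside, reusing the bdevperf RPC socket seen elsewhere in this run; bdev_nvme_get_controllers is a standard SPDK RPC, but polling it here is illustrative, not part of the test:

    # Poll the initiator once a second; the controller stays attached but
    # keeps failing reinitialization until the listener comes back.
    watch -n 1 '/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers'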
00:23:37.367 [2024-07-25 05:29:41.136104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:37.367 05:29:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:37.367 [2024-07-25 05:29:41.417844] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:37.367 05:29:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 95559
00:23:38.301 [2024-07-25 05:29:42.152444] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:44.857
00:23:44.857 Latency(us)
00:23:44.857 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s  Average      min        max
00:23:44.857 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:44.857 Verification LBA range: start 0x0 length 0x4000
00:23:44.857 NVMe0n1                     :      10.01  6253.05    24.43     0.00     0.00 20428.00  2100.13 3019898.88
00:23:44.857 ===================================================================================================================
00:23:44.857 Total                       :            6253.05    24.43     0.00     0.00 20428.00  2100.13 3019898.88
00:23:44.857 0
00:23:44.857 05:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=95677
00:23:44.857 05:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:44.857 05:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:45.115 Running I/O for 10 seconds...
00:23:46.050 05:29:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:46.310 [2024-07-25 05:29:50.265980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f310 is same with the state(5) to be set
[... the same *ERROR* line repeats several dozen times (timestamps 05:29:50.265980 through 05:29:50.266369) as the listener removal tears the qpair down ...]
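The sequence of events here: the listener is restored (host/timeout.sh@91), the target starts listening again, the pending controller reset finally succeeds, and bdevperf reports the first pass at 6253.05 IOPS over 10.01 s with a maximum latency of roughly 3.02 s, which is the I/O that sat stalled across the outage. The script then starts a second pass and removes the listener mid-run to exercise the timeout path again. A sketch of that sequence as the script presumably issues it; all three commands appear verbatim in the trace above, while backgrounding perform_tests and capturing its pid is an assumption inferred from the rpc_pid=95677 line:

    # Re-run the bdevperf job over the existing bdev, then yank the listener
    # one second into the run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
    sleep 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420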
00:23:46.311 [2024-07-25 05:29:50.267695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:46.311 [2024-07-25 05:29:50.267738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every remaining queued WRITE (lba:76168 through lba:76776) and queued READ (lba:75784 through lba:76088); each completes with ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:23:46.314 [2024-07-25 05:29:50.270526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:46.314 [2024-07-25 05:29:50.270538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76784 len:8 PRP1 0x0 PRP2 0x0
00:23:46.314 [2024-07-25 05:29:50.270547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort/manual-complete sequence repeats for WRITE lba:76792 and lba:76800 and for READ lba:76096, lba:76104, and lba:76112, each completed with ABORTED - SQ DELETION (00/08) ...]
[2024-07-25 05:29:50.270728] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.314 [2024-07-25 05:29:50.270735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.314 [2024-07-25 05:29:50.270743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76120 len:8 PRP1 0x0 PRP2 0x0 00:23:46.314 [2024-07-25 05:29:50.270752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.314 [2024-07-25 05:29:50.270761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.314 [2024-07-25 05:29:50.270768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.314 [2024-07-25 05:29:50.270775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76128 len:8 PRP1 0x0 PRP2 0x0 00:23:46.314 [2024-07-25 05:29:50.270784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.314 [2024-07-25 05:29:50.270793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.314 [2024-07-25 05:29:50.270800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.314 [2024-07-25 05:29:50.270808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76136 len:8 PRP1 0x0 PRP2 0x0 00:23:46.314 [2024-07-25 05:29:50.270816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.314 [2024-07-25 05:29:50.270825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.314 [2024-07-25 05:29:50.270832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.314 [2024-07-25 05:29:50.270840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76144 len:8 PRP1 0x0 PRP2 0x0 00:23:46.314 [2024-07-25 05:29:50.270849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.314 [2024-07-25 05:29:50.270858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:46.314 [2024-07-25 05:29:50.270865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:46.314 [2024-07-25 05:29:50.270872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76152 len:8 PRP1 0x0 PRP2 0x0 00:23:46.314 [2024-07-25 05:29:50.270881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.314 [2024-07-25 05:29:50.270923] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b33010 was disconnected and freed. reset controller. 
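What this burst records: once the TCP connection to the target is gone, bdev_nvme fails every outstanding command on the I/O qpair with ABORTED - SQ DELETION (00/08), manually completes whatever was still queued, then frees the qpair and schedules a controller reset. A minimal sketch of the trigger, assuming the same rpc.py path, NQN and listener address this job uses:

    # drop the listener while bdevperf still has I/O in flight;
    # every outstanding WRITE/READ then completes ABORTED - SQ DELETION
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420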
00:23:46.314 [2024-07-25 05:29:50.271013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.314 05:29:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:23:46.314 [2024-07-25 05:29:50.290996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.314 [2024-07-25 05:29:50.291040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.314 [2024-07-25 05:29:50.291056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.314 [2024-07-25 05:29:50.291069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.314 [2024-07-25 05:29:50.291081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.314 [2024-07-25 05:29:50.291093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.314 [2024-07-25 05:29:50.291105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.314 [2024-07-25 05:29:50.291116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2240 is same with the state(5) to be set
00:23:46.314 [2024-07-25 05:29:50.291448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:46.314 [2024-07-25 05:29:50.291490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor
00:23:46.314 [2024-07-25 05:29:50.291641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:46.315 [2024-07-25 05:29:50.291675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab2240 with addr=10.0.0.2, port=4420
00:23:46.315 [2024-07-25 05:29:50.291703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2240 is same with the state(5) to be set
00:23:46.315 [2024-07-25 05:29:50.291728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor
00:23:46.315 [2024-07-25 05:29:50.291748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:46.315 [2024-07-25 05:29:50.291759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:46.315 [2024-07-25 05:29:50.291771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:46.315 [2024-07-25 05:29:50.291796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:46.315 [2024-07-25 05:29:50.291809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.248 [2024-07-25 05:29:51.291964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.248 [2024-07-25 05:29:51.292036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab2240 with addr=10.0.0.2, port=4420
00:23:47.248 [2024-07-25 05:29:51.292054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2240 is same with the state(5) to be set
00:23:47.248 [2024-07-25 05:29:51.292080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor
00:23:47.248 [2024-07-25 05:29:51.292100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.248 [2024-07-25 05:29:51.292109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.248 [2024-07-25 05:29:51.292120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.248 [2024-07-25 05:29:51.292146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.248 [2024-07-25 05:29:51.292158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.182 [2024-07-25 05:29:52.292288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.182 [2024-07-25 05:29:52.292356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab2240 with addr=10.0.0.2, port=4420
00:23:48.182 [2024-07-25 05:29:52.292373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2240 is same with the state(5) to be set
00:23:48.182 [2024-07-25 05:29:52.292399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor
00:23:48.182 [2024-07-25 05:29:52.292418] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.182 [2024-07-25 05:29:52.292427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.182 [2024-07-25 05:29:52.292438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.182 [2024-07-25 05:29:52.292466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.182 [2024-07-25 05:29:52.292478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.115 05:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:49.373 [2024-07-25 05:29:53.296237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.373 [2024-07-25 05:29:53.296297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab2240 with addr=10.0.0.2, port=4420
00:23:49.373 [2024-07-25 05:29:53.296314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2240 is same with the state(5) to be set
00:23:49.373 [2024-07-25 05:29:53.296572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab2240 (9): Bad file descriptor
00:23:49.373 [2024-07-25 05:29:53.296832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.373 [2024-07-25 05:29:53.296851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.373 [2024-07-25 05:29:53.296864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.373 [2024-07-25 05:29:53.300845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.373 [2024-07-25 05:29:53.300880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.630 [2024-07-25 05:29:53.563917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:49.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 95677
00:23:50.195 [2024-07-25 05:29:54.342431] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
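The cadence above is one reconnect attempt per second: each cycle is connect() errno 111 (connection refused) -> controller reinitialization failed -> reset failed -> disconnect and retry, until the listener returns at host/timeout.sh@102 and the next reset succeeds. A sketch of the listener toggle the script drives around this window (the remove step happens earlier in the script, and 95677 appears to be the pid the script waits on here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # earlier step: forces the reconnect loop
    sleep 3                                                               # host/timeout.sh@101: let a few attempts fail
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # host/timeout.sh@102: bring the port back
    wait 95677                                                            # host/timeout.sh@103: "Resetting controller successful."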
00:23:55.457
00:23:55.457                                    Latency(us)
00:23:55.457 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average       min          max
00:23:55.457 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:55.457 Verification LBA range: start 0x0 length 0x4000
00:23:55.457 NVMe0n1            :      10.01    5233.93      20.45    3547.29     0.00    14542.17    681.43   3035150.89
00:23:55.457 ===================================================================================================================
00:23:55.457 Total              :              5233.93      20.45    3547.29     0.00    14542.17      0.00   3035150.89
00:23:55.457 0
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 95511
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95511 ']'
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95511
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95511
00:23:55.457 killing process with pid 95511
Received shutdown signal, test time was about 10.000000 seconds
00:23:55.457
00:23:55.457                                    Latency(us)
00:23:55.457 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average       min          max
00:23:55.457 ===================================================================================================================
00:23:55.457 Total              :                 0.00       0.00       0.00     0.00        0.00      0.00         0.00
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95511'
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95511
00:23:55.457 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95511
00:23:55.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=95802
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 95802 /var/tmp/bdevperf.sock
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95802 ']'
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
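The killprocess xtrace above expands to roughly this helper from autotest_common.sh (a sketch reconstructed from the traced lines, not the verbatim source):

    killprocess() {
        [ -z "$1" ] && return 1                           # @950: require a pid argument
        kill -0 "$1" || return 1                          # @954: bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then                   # @955
            process_name=$(ps --no-headers -o comm= "$1") # @956: "reactor_2" in this run
        fi
        # @960 checks whether the comm is a sudo wrapper ('[' reactor_2 = sudo ']' here);
        # the real helper signals the wrapped child in that case, not reproduced in this sketch
        echo "killing process with pid $1"                # @968
        kill "$1"                                         # @969
        wait "$1"                                         # @974: reap it and collect the exit status
    }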
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:55.458 05:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:55.458 [2024-07-25 05:29:59.391753] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:23:55.458 [2024-07-25 05:29:59.392263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95802 ]
00:23:55.458 [2024-07-25 05:29:59.534456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:55.458 [2024-07-25 05:29:59.591686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:56.391 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:56.391 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:23:56.391 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=95827
00:23:56.391 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95802 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:23:56.391 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:23:56.649 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:23:56.907 NVMe0n1
00:23:56.907 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=95886
00:23:56.907 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:56.907 05:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:23:56.907 Running I/O for 10 seconds...
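Condensed, the traced commands above amount to this bring-up sequence for the second bdevperf instance (a sketch using the exact binaries and flag values this job traces; waitforlisten is the autotest_common.sh helper that polls until the RPC socket answers):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # start bdevperf idle (-z, wait for RPC configuration) on core 2 (-m 0x4)
    $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!
    waitforlisten $bdevperf_pid $sock

    # attach the bpf timeout probes to the new process
    $spdk/scripts/bpftrace.sh $bdevperf_pid $spdk/scripts/bpf/nvmf_timeout.bt &

    # tune bdev_nvme retry/error options exactly as traced (-r -1 -e 9)
    $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1 -e 9

    # attach the controller: retry every 2 s, declare it lost after 5 s
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # kick off the 10-second randread run
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &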
00:23:57.843 05:30:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:58.104 [2024-07-25 05:30:02.227926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a2c80 is same with the state(5) to be set
[log trimmed: the nvmf_tcp_qpair_set_recv_state message above repeats ~120 more times for tqpair=0x14a2c80; timestamps 05:30:02.227988-05:30:02.229035]
00:23:58.106 [2024-07-25 05:30:02.229202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:58.106 [2024-07-25 05:30:02.229232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log trimmed: ~60 more READ commands on sqid:1 (cid varies, lba values 3312-130592, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08); timestamps 05:30:02.229255-05:30:02.230493]
00:23:58.107 [2024-07-25 05:30:02.230504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:58.107 [2024-07-25 05:30:02.230513] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:58.108 [2024-07-25 05:30:02.230936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.230989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.230998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 
05:30:02.231153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.108 [2024-07-25 05:30:02.231359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.108 [2024-07-25 05:30:02.231368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:96 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.109 [2024-07-25 05:30:02.231880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180e8d0 is same with the state(5) to be set 00:23:58.109 [2024-07-25 05:30:02.231902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.109 [2024-07-25 05:30:02.231910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.109 [2024-07-25 05:30:02.231920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115816 len:8 PRP1 0x0 PRP2 0x0 00:23:58.109 [2024-07-25 05:30:02.231929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.109 [2024-07-25 05:30:02.231983] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x180e8d0 was disconnected and freed. reset controller. 
00:23:58.109 [2024-07-25 05:30:02.232247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:58.109 [2024-07-25 05:30:02.232326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a1240 (9): Bad file descriptor
00:23:58.109 [2024-07-25 05:30:02.232434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:58.109 [2024-07-25 05:30:02.232455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a1240 with addr=10.0.0.2, port=4420
00:23:58.109 [2024-07-25 05:30:02.232465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1240 is same with the state(5) to be set
00:23:58.109 [2024-07-25 05:30:02.232483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a1240 (9): Bad file descriptor
00:23:58.109 [2024-07-25 05:30:02.232499] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:58.109 [2024-07-25 05:30:02.232509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:58.110 [2024-07-25 05:30:02.232519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:58.110 [2024-07-25 05:30:02.232540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:58.110 [2024-07-25 05:30:02.232551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
05:30:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 95886
00:24:00.638 [2024-07-25 05:30:04.232812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-07-25 05:30:04.232887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a1240 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-07-25 05:30:04.232905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1240 is same with the state(5) to be set
00:24:00.638 [2024-07-25 05:30:04.232935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a1240 (9): Bad file descriptor
00:24:00.638 [2024-07-25 05:30:04.232969] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-07-25 05:30:04.232981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-07-25 05:30:04.232992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-07-25 05:30:04.233020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-07-25 05:30:04.233031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.539 [2024-07-25 05:30:06.233257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.539 [2024-07-25 05:30:06.233321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a1240 with addr=10.0.0.2, port=4420
00:24:02.539 [2024-07-25 05:30:06.233338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1240 is same with the state(5) to be set
00:24:02.539 [2024-07-25 05:30:06.233364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a1240 (9): Bad file descriptor
00:24:02.539 [2024-07-25 05:30:06.233383] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.539 [2024-07-25 05:30:06.233393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.539 [2024-07-25 05:30:06.233404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.539 [2024-07-25 05:30:06.233432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.539 [2024-07-25 05:30:06.233443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:04.513 [2024-07-25 05:30:08.233584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:04.513 [2024-07-25 05:30:08.233640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:04.513 [2024-07-25 05:30:08.233653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:04.513 [2024-07-25 05:30:08.233664] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:04.513 [2024-07-25 05:30:08.233691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:05.082
00:24:05.082 Latency(us)
00:24:05.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.082 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:24:05.082 NVMe0n1 : 8.17 2408.39 9.41 15.67 0.00 52750.99 2472.49 7015926.69
00:24:05.082 ===================================================================================================================
00:24:05.082 Total : 2408.39 9.41 15.67 0.00 52750.99 2472.49 7015926.69
00:24:05.082 0
00:24:05.082 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:05.082 Attaching 5 probes...
00:24:05.082 1358.901865: reset bdev controller NVMe0
00:24:05.082 1359.027703: reconnect bdev controller NVMe0
00:24:05.082 3359.328109: reconnect delay bdev controller NVMe0
00:24:05.082 3359.352364: reconnect bdev controller NVMe0
00:24:05.082 5359.775887: reconnect delay bdev controller NVMe0
00:24:05.082 5359.799644: reconnect bdev controller NVMe0
00:24:05.082 7360.217157: reconnect delay bdev controller NVMe0
00:24:05.082 7360.237904: reconnect bdev controller NVMe0
00:24:05.082 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 95827
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 95802
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95802 ']'
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95802
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95802
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95802'
killing process with pid 95802
Received shutdown signal, test time was about 8.222551 seconds
00:24:05.341
00:24:05.341 Latency(us)
00:24:05.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.341 ===================================================================================================================
00:24:05.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95802
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95802
00:24:05.341 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:05.599 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:24:05.599 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
00:24:05.599 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:05.599 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync
00:24:05.599 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:05.599 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e
00:24:05.599 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:05.857 rmmod nvme_tcp
00:24:05.857 rmmod nvme_fabrics
00:24:05.857 rmmod nvme_keyring
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95246 ']'
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95246
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95246 ']'
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95246
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95246
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 95246
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95246'
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95246
00:24:05.857 05:30:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95246
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:05.857 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:06.116 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:24:06.116 ************************************
00:24:06.116 END TEST nvmf_timeout
00:24:06.116 ************************************
00:24:06.116
00:24:06.116 real 0m45.984s
00:24:06.116 user 2m16.691s
00:24:06.116 sys 0m4.486s
00:24:06.116 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:06.116 05:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:06.116 05:30:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]]
00:24:06.116 05:30:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:24:06.116
00:24:06.116 real 5m33.278s
00:24:06.116 user 14m32.256s
00:24:06.116 sys 0m58.845s
00:24:06.116 05:30:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
************************************
00:24:06.116 END TEST nvmf_host
00:24:06.116 ************************************
00:24:06.116 05:30:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.116
00:24:06.116 real 15m36.222s
00:24:06.116 user 41m53.906s
00:24:06.116 sys 3m13.186s
00:24:06.116 05:30:10 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:06.116 05:30:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:06.116 ************************************
00:24:06.116 END TEST nvmf_tcp
00:24:06.116 ************************************
00:24:06.116 05:30:10 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]]
00:24:06.116 05:30:10 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:24:06.116 05:30:10 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:06.116 05:30:10 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:06.116 05:30:10 -- common/autotest_common.sh@10 -- # set +x
00:24:06.116 ************************************
00:24:06.116 START TEST spdkcli_nvmf_tcp
00:24:06.116 ************************************
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:24:06.116 * Looking for test storage...
00:24:06.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:06.116 05:30:10 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:06.117 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96099
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96099
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 96099 ']'
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:06.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:06.375 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:06.375 [2024-07-25 05:30:10.351336] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization...
00:24:06.375 [2024-07-25 05:30:10.351445] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96099 ]
00:24:06.375 [2024-07-25 05:30:10.488275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:06.632 [2024-07-25 05:30:10.557781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:06.632 [2024-07-25 05:30:10.557797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:06.632 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:06.632 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0
00:24:06.632 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:24:06.632 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:06.632 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:06.633 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:24:06.633 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:24:06.633 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:24:06.633 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:06.633 05:30:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:06.633 05:30:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:24:06.633 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:24:06.633 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:24:06.633 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:24:06.633 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:24:06.633 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:24:06.633 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:24:06.633 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:24:06.633 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:24:06.633 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:24:06.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:24:06.633 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:24:06.633 '
00:24:09.163 [2024-07-25 05:30:13.290926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:10.576 [2024-07-25 05:30:14.563988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:24:13.106 [2024-07-25 05:30:16.925615] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:24:15.008 [2024-07-25 05:30:18.966974] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:24:16.383 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:24:16.383 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:24:16.383 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:16.383 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:16.383 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:16.383 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:16.383 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:16.383 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:16.383 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:16.383 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:16.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:16.383 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:16.641 05:30:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:16.641 
05:30:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.641 05:30:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.641 05:30:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:16.641 05:30:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.641 05:30:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.642 05:30:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:16.642 05:30:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:24:17.207 05:30:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.208 05:30:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:17.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:17.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:17.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:17.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:17.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:17.208 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:17.208 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:17.208 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:17.208 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:17.208 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:17.208 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:17.208 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:17.208 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:17.208 ' 00:24:22.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:22.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:22.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:22.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:22.474 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:22.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:22.474 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:22.474 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:22.474 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:22.474 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:22.474 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:22.474 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:22.474 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:22.474 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:22.474 05:30:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:22.474 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.474 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:22.474 05:30:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96099 00:24:22.474 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96099 ']' 00:24:22.474 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96099 00:24:22.474 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96099 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:22.733 killing process with pid 96099 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96099' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 96099 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 96099 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96099 ']' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96099 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96099 ']' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96099 00:24:22.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (96099) - No such process 00:24:22.733 Process with pid 96099 is not found 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 96099 is not found' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:22.733 
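For reference, each spdkcli path exercised in the create/clear command lists above maps onto a single SPDK JSON-RPC method. A minimal sketch of the equivalent direct rpc.py calls (method names from SPDK's scripts/rpc.py; the exact flag spellings can vary by release, so treat these as illustrative rather than the test's own invocation):

  # 32 MiB malloc bdev with 512-byte blocks, as '/bdevs/malloc create 32 512 Malloc1' does
  scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
  # TCP transport with io_unit_size=8192 and max_io_qpairs_per_ctrlr=4
  scripts/rpc.py nvmf_create_transport -t tcp -u 8192 -m 4
  # subsystem, namespace with explicit nsid, and listener, as under '/nvmf/subsystem'
  scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260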
00:24:22.733 real 0m16.650s 00:24:22.733 user 0m36.071s 00:24:22.733 sys 0m0.801s 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.733 05:30:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:22.733 ************************************ 00:24:22.733 END TEST spdkcli_nvmf_tcp 00:24:22.733 ************************************ 00:24:22.733 05:30:26 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:22.733 05:30:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:22.733 05:30:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.733 05:30:26 -- common/autotest_common.sh@10 -- # set +x 00:24:22.733 ************************************ 00:24:22.733 START TEST nvmf_identify_passthru 00:24:22.733 ************************************ 00:24:22.733 05:30:26 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:22.992 * Looking for test storage... 00:24:22.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:22.992 05:30:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.992 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:22.992 05:30:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.992 05:30:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.992 05:30:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
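The NVME_HOSTNQN/NVME_HOSTID pair captured by nvmf/common.sh above comes straight from nvme-cli. A minimal sketch of that setup, assuming a parameter expansion for the ID derivation (the exact line in common.sh may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed derivation: the bare UUID after the last colon
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # later consumed as: nvme connect "${NVME_HOST[@]}" -t tcp -a <traddr> -s <trsvcid> -n <subnqn>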
00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.993 05:30:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:22.993 05:30:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.993 05:30:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.993 05:30:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:22.993 05:30:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.993 05:30:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.993 05:30:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:22.993 05:30:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:22.993 05:30:26 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:22.993 Cannot find device "nvmf_tgt_br" 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:22.993 Cannot find device "nvmf_tgt_br2" 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:22.993 Cannot find device "nvmf_tgt_br" 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:22.993 Cannot find device "nvmf_tgt_br2" 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:22.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:22.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:22.993 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:23.251 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:23.251 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:23.251 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:23.251 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:23.251 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:23.251 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:23.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:24:23.252 00:24:23.252 --- 10.0.0.2 ping statistics --- 00:24:23.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.252 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:23.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:23.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:24:23.252 00:24:23.252 --- 10.0.0.3 ping statistics --- 00:24:23.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.252 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:23.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:24:23.252 00:24:23.252 --- 10.0.0.1 ping statistics --- 00:24:23.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.252 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.252 05:30:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.252 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:23.252 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:23.252 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:24:23.252 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:24:23.252 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:24:23.252 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:24:23.252 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:23.252 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:23.510 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
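The serial number above is scraped from the identify output with a plain grep/awk pipeline; condensed into a standalone form (paths as in this workspace):

  bdf=0000:00:10.0
  serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
             -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  echo "$serial"   # 12340 for the QEMU-emulated drive in this run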
00:24:23.510 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:24:23.510 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:23.510 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:23.769 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:24:23.769 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:23.769 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:23.769 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=96576 00:24:23.769 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:23.769 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.769 05:30:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 96576 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 96576 ']' 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.769 05:30:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:23.769 [2024-07-25 05:30:27.880943] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:24:23.769 [2024-07-25 05:30:27.881042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.029 [2024-07-25 05:30:28.016049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.029 [2024-07-25 05:30:28.086346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.029 [2024-07-25 05:30:28.086418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.029 [2024-07-25 05:30:28.086434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.029 [2024-07-25 05:30:28.086444] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:24.029 [2024-07-25 05:30:28.086454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.029 [2024-07-25 05:30:28.086616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.029 [2024-07-25 05:30:28.086728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.029 [2024-07-25 05:30:28.087379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.029 [2024-07-25 05:30:28.087398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:24:24.029 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.029 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.029 [2024-07-25 05:30:28.195859] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.029 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.029 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.288 [2024-07-25 05:30:28.209351] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.288 Nvme0n1 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.288 [2024-07-25 05:30:28.348929] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.288 [ 00:24:24.288 { 00:24:24.288 "allow_any_host": true, 00:24:24.288 "hosts": [], 00:24:24.288 "listen_addresses": [], 00:24:24.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:24.288 "subtype": "Discovery" 00:24:24.288 }, 00:24:24.288 { 00:24:24.288 "allow_any_host": true, 00:24:24.288 "hosts": [], 00:24:24.288 "listen_addresses": [ 00:24:24.288 { 00:24:24.288 "adrfam": "IPv4", 00:24:24.288 "traddr": "10.0.0.2", 00:24:24.288 "trsvcid": "4420", 00:24:24.288 "trtype": "TCP" 00:24:24.288 } 00:24:24.288 ], 00:24:24.288 "max_cntlid": 65519, 00:24:24.288 "max_namespaces": 1, 00:24:24.288 "min_cntlid": 1, 00:24:24.288 "model_number": "SPDK bdev Controller", 00:24:24.288 "namespaces": [ 00:24:24.288 { 00:24:24.288 "bdev_name": "Nvme0n1", 00:24:24.288 "name": "Nvme0n1", 00:24:24.288 "nguid": "45694477B76646249BD198C9458BF867", 00:24:24.288 "nsid": 1, 00:24:24.288 "uuid": "45694477-b766-4624-9bd1-98c9458bf867" 00:24:24.288 } 00:24:24.288 ], 00:24:24.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.288 "serial_number": "SPDK00000000000001", 00:24:24.288 "subtype": "NVMe" 00:24:24.288 } 00:24:24.288 ] 00:24:24.288 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:24:24.288 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:24.547 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:24:24.547 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:24.547 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:24:24.547 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:24:24.805 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:24:24.805 05:30:28 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:24:24.805 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:24:24.806 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.806 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:24:24.806 05:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:24.806 rmmod nvme_tcp 00:24:24.806 rmmod nvme_fabrics 00:24:24.806 rmmod nvme_keyring 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 96576 ']' 00:24:24.806 05:30:28 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 96576 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 96576 ']' 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 96576 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96576 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:24.806 killing process with pid 96576 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96576' 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 96576 00:24:24.806 05:30:28 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 96576 00:24:25.064 05:30:29 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.064 05:30:29 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.064 05:30:29 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.064 05:30:29 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.064 05:30:29 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.064 05:30:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.064 05:30:29 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:25.064 05:30:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.064 05:30:29 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:25.064 00:24:25.064 real 0m2.232s 00:24:25.064 user 0m4.263s 00:24:25.064 sys 0m0.667s 00:24:25.064 05:30:29 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.064 05:30:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:25.064 ************************************ 00:24:25.064 END TEST nvmf_identify_passthru 00:24:25.064 ************************************ 00:24:25.064 05:30:29 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:25.065 05:30:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:25.065 05:30:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.065 05:30:29 -- common/autotest_common.sh@10 -- # set +x 00:24:25.065 ************************************ 00:24:25.065 START TEST nvmf_dif 00:24:25.065 ************************************ 00:24:25.065 05:30:29 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:25.065 * Looking for test storage... 00:24:25.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:25.065 05:30:29 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.065 05:30:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.323 05:30:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:24:25.323 05:30:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:24:25.323 05:30:29 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.323 05:30:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.323 05:30:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:25.323 05:30:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.323 05:30:29 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:25.323 05:30:29 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.323 05:30:29 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.323 05:30:29 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.323 05:30:29 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.323 05:30:29 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.324 05:30:29 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.324 05:30:29 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:25.324 05:30:29 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.324 05:30:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:25.324 05:30:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:25.324 05:30:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:25.324 05:30:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:25.324 05:30:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.324 05:30:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:25.324 05:30:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
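The nvmf_veth_init run that follows rebuilds the same virt topology the previous test tore down: a network namespace holding the target side of the veth pairs, bridged back to the initiator. Condensed from the commands traced below (IP plan: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 target):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT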
00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:25.324 Cannot find device "nvmf_tgt_br" 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@155 -- # true 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:25.324 Cannot find device "nvmf_tgt_br2" 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@156 -- # true 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:25.324 Cannot find device "nvmf_tgt_br" 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@158 -- # true 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:25.324 Cannot find device "nvmf_tgt_br2" 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@159 -- # true 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:25.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:25.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:25.324 05:30:29 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:25.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:24:25.628 00:24:25.628 --- 10.0.0.2 ping statistics --- 00:24:25.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.628 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:25.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:25.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:25.628 00:24:25.628 --- 10.0.0.3 ping statistics --- 00:24:25.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.628 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:25.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:24:25.628 00:24:25.628 --- 10.0.0.1 ping statistics --- 00:24:25.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.628 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:25.628 05:30:29 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:25.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:25.886 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:25.886 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:25.886 05:30:29 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.886 05:30:29 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:25.886 05:30:29 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:25.886 05:30:29 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.886 05:30:29 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:25.886 05:30:29 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:25.886 05:30:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:25.886 05:30:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:25.886 05:30:29 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.886 05:30:29 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.886 05:30:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:25.886 05:30:30 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=96907 00:24:25.886 05:30:30 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:25.886 05:30:30 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 96907 00:24:25.886 05:30:30 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 96907 ']' 00:24:25.886 05:30:30 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.886 05:30:30 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.887 05:30:30 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.887 05:30:30 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.887 05:30:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:26.145 [2024-07-25 05:30:30.070827] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:24:26.145 [2024-07-25 05:30:30.070936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.145 [2024-07-25 05:30:30.214610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.145 [2024-07-25 05:30:30.273606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
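The nvmf_veth_init and nvmfappstart steps traced above build the whole test topology from scratch: a network namespace for the SPDK target, veth pairs bridged back to the host-side initiator, firewall rules for the NVMe/TCP port, and the target application launched inside that namespace. Below is a minimal shell sketch of that sequence, reconstructed from the commands in the trace; interface names, addresses, and the nvmf_tgt path mirror the log, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is elided for brevity.

    # Create the target namespace and the veth pairs (host end / bridge end).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    # Move the target-side interface into the namespace and address both ends.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # Bring the links up, including loopback inside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Tie the host-side veth ends together with a bridge.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    # Launch the target inside the namespace, as nvmfappstart does above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

The "Cannot find device" and "Cannot open network namespace" errors earlier in the trace come from the cleanup pass that first removes any topology left by a previous run; each failing cleanup command is followed by a bare `true` in the xtrace, so those failures are tolerated by design.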
00:24:26.145 [2024-07-25 05:30:30.273656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.145 [2024-07-25 05:30:30.273668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.145 [2024-07-25 05:30:30.273676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.145 [2024-07-25 05:30:30.273683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.145 [2024-07-25 05:30:30.273710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:24:27.080 05:30:31 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:27.080 05:30:31 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.080 05:30:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:27.080 05:30:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:27.080 [2024-07-25 05:30:31.130466] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.080 05:30:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:27.080 05:30:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:27.080 ************************************ 00:24:27.080 START TEST fio_dif_1_default 00:24:27.080 ************************************ 00:24:27.080 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:24:27.080 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:27.080 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:27.080 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:27.080 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:27.080 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:27.081 bdev_null0 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.081 05:30:31 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:27.081 [2024-07-25 05:30:31.190576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:27.081 { 00:24:27.081 "params": { 00:24:27.081 "name": "Nvme$subsystem", 00:24:27.081 "trtype": "$TEST_TRANSPORT", 00:24:27.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.081 "adrfam": "ipv4", 00:24:27.081 "trsvcid": "$NVMF_PORT", 00:24:27.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.081 "hdgst": ${hdgst:-false}, 00:24:27.081 "ddgst": ${ddgst:-false} 00:24:27.081 }, 
00:24:27.081 "method": "bdev_nvme_attach_controller" 00:24:27.081 } 00:24:27.081 EOF 00:24:27.081 )") 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:27.081 "params": { 00:24:27.081 "name": "Nvme0", 00:24:27.081 "trtype": "tcp", 00:24:27.081 "traddr": "10.0.0.2", 00:24:27.081 "adrfam": "ipv4", 00:24:27.081 "trsvcid": "4420", 00:24:27.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:27.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:27.081 "hdgst": false, 00:24:27.081 "ddgst": false 00:24:27.081 }, 00:24:27.081 "method": "bdev_nvme_attach_controller" 00:24:27.081 }' 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:27.081 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:27.340 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:27.340 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:27.340 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:27.340 05:30:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:27.340 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:27.340 fio-3.35 00:24:27.340 Starting 1 thread 00:24:39.537 00:24:39.537 filename0: (groupid=0, jobs=1): err= 0: pid=96992: Thu Jul 25 05:30:41 2024 00:24:39.537 read: IOPS=2407, BW=9629KiB/s (9861kB/s)(94.0MiB/10001msec) 00:24:39.537 slat (nsec): min=6846, max=43267, avg=8843.16, stdev=2281.89 00:24:39.537 clat (usec): min=440, max=42550, avg=1635.37, stdev=6691.46 00:24:39.537 lat (usec): min=448, max=42560, avg=1644.21, stdev=6691.52 00:24:39.537 clat percentiles (usec): 00:24:39.537 | 1.00th=[ 465], 5.00th=[ 469], 10.00th=[ 474], 20.00th=[ 482], 00:24:39.537 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 494], 
60.00th=[ 498], 00:24:39.537 | 70.00th=[ 506], 80.00th=[ 510], 90.00th=[ 523], 95.00th=[ 553], 00:24:39.537 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:24:39.537 | 99.99th=[42730] 00:24:39.537 bw ( KiB/s): min= 2624, max=15808, per=100.00%, avg=9770.11, stdev=3229.11, samples=19 00:24:39.537 iops : min= 656, max= 3952, avg=2442.53, stdev=807.28, samples=19 00:24:39.537 lat (usec) : 500=61.88%, 750=35.28% 00:24:39.537 lat (msec) : 2=0.02%, 10=0.02%, 50=2.81% 00:24:39.537 cpu : usr=89.92%, sys=8.76%, ctx=17, majf=0, minf=9 00:24:39.537 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:39.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.537 issued rwts: total=24076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:39.537 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:39.537 00:24:39.537 Run status group 0 (all jobs): 00:24:39.537 READ: bw=9629KiB/s (9861kB/s), 9629KiB/s-9629KiB/s (9861kB/s-9861kB/s), io=94.0MiB (98.6MB), run=10001-10001msec 00:24:39.537 05:30:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:39.537 05:30:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:39.537 05:30:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:39.537 05:30:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:39.537 05:30:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 ************************************ 00:24:39.538 END TEST fio_dif_1_default 00:24:39.538 ************************************ 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 00:24:39.538 real 0m10.919s 00:24:39.538 user 0m9.604s 00:24:39.538 sys 0m1.080s 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 05:30:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:39.538 05:30:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:39.538 05:30:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 ************************************ 00:24:39.538 START TEST fio_dif_1_multi_subsystems 00:24:39.538 ************************************ 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:24:39.538 05:30:42 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 bdev_null0 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 [2024-07-25 05:30:42.144677] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 bdev_null1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.538 { 00:24:39.538 "params": { 00:24:39.538 "name": "Nvme$subsystem", 00:24:39.538 "trtype": "$TEST_TRANSPORT", 00:24:39.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.538 "adrfam": "ipv4", 00:24:39.538 "trsvcid": "$NVMF_PORT", 00:24:39.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.538 "hdgst": ${hdgst:-false}, 00:24:39.538 "ddgst": ${ddgst:-false} 00:24:39.538 }, 00:24:39.538 "method": "bdev_nvme_attach_controller" 00:24:39.538 } 00:24:39.538 EOF 00:24:39.538 )") 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:39.538 05:30:42 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.538 { 00:24:39.538 "params": { 00:24:39.538 "name": "Nvme$subsystem", 00:24:39.538 "trtype": "$TEST_TRANSPORT", 00:24:39.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.538 "adrfam": "ipv4", 00:24:39.538 "trsvcid": "$NVMF_PORT", 00:24:39.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.538 "hdgst": ${hdgst:-false}, 00:24:39.538 "ddgst": ${ddgst:-false} 00:24:39.538 }, 00:24:39.538 "method": "bdev_nvme_attach_controller" 00:24:39.538 } 00:24:39.538 EOF 00:24:39.538 )") 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
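Each fio_dif test case provisions its targets the same way: create_subsystem makes a null bdev with 16 bytes of per-block metadata for DIF, wraps it in an NVMe-oF subsystem, and adds a TCP listener; gen_nvmf_target_json then emits one bdev_nvme_attach_controller parameter block per subsystem, which fio consumes through the spdk_bdev ioengine (the assembled JSON for Nvme0/Nvme1 is printed just below). A minimal sketch of the equivalent standalone RPC calls, using SPDK's rpc.py with the same arguments the rpc_cmd wrapper passes in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 64 MB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 1.
    $RPC bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
    # Subsystem cnode1, open to any host, backed by that bdev.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        --serial-number 53313233-1 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
    # NVMe/TCP listener on the namespaced target address.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

fio is then launched with the SPDK bdev plugin preloaded, exactly as in the trace:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

where fd 62 carries the generated JSON subsystem config and fd 61 the generated fio job file.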
00:24:39.538 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:39.539 "params": { 00:24:39.539 "name": "Nvme0", 00:24:39.539 "trtype": "tcp", 00:24:39.539 "traddr": "10.0.0.2", 00:24:39.539 "adrfam": "ipv4", 00:24:39.539 "trsvcid": "4420", 00:24:39.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:39.539 "hdgst": false, 00:24:39.539 "ddgst": false 00:24:39.539 }, 00:24:39.539 "method": "bdev_nvme_attach_controller" 00:24:39.539 },{ 00:24:39.539 "params": { 00:24:39.539 "name": "Nvme1", 00:24:39.539 "trtype": "tcp", 00:24:39.539 "traddr": "10.0.0.2", 00:24:39.539 "adrfam": "ipv4", 00:24:39.539 "trsvcid": "4420", 00:24:39.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.539 "hdgst": false, 00:24:39.539 "ddgst": false 00:24:39.539 }, 00:24:39.539 "method": "bdev_nvme_attach_controller" 00:24:39.539 }' 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:39.539 05:30:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:39.539 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:39.539 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:39.539 fio-3.35 00:24:39.539 Starting 2 threads 00:24:49.515 00:24:49.515 filename0: (groupid=0, jobs=1): err= 0: pid=97151: Thu Jul 25 05:30:53 2024 00:24:49.515 read: IOPS=243, BW=975KiB/s (999kB/s)(9760KiB/10007msec) 00:24:49.515 slat (nsec): min=6497, max=40863, avg=9799.21, stdev=3880.22 00:24:49.515 clat (usec): min=455, max=43160, avg=16373.81, stdev=19805.16 00:24:49.515 lat (usec): min=463, max=43197, avg=16383.61, stdev=19805.25 00:24:49.515 clat percentiles (usec): 00:24:49.515 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 490], 00:24:49.515 | 30.00th=[ 498], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 1057], 00:24:49.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:24:49.515 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:24:49.515 | 99.99th=[43254] 00:24:49.515 bw ( KiB/s): min= 576, max= 1344, per=49.69%, avg=974.40, stdev=195.47, samples=20 00:24:49.515 iops : 
min= 144, max= 336, avg=243.60, stdev=48.87, samples=20 00:24:49.515 lat (usec) : 500=33.85%, 750=21.56%, 1000=4.43% 00:24:49.515 lat (msec) : 2=1.15%, 50=39.02% 00:24:49.515 cpu : usr=94.73%, sys=4.51%, ctx=126, majf=0, minf=0 00:24:49.515 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:49.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.515 issued rwts: total=2440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.515 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:49.515 filename1: (groupid=0, jobs=1): err= 0: pid=97152: Thu Jul 25 05:30:53 2024 00:24:49.515 read: IOPS=246, BW=985KiB/s (1008kB/s)(9856KiB/10008msec) 00:24:49.515 slat (nsec): min=6580, max=46472, avg=9752.74, stdev=3870.69 00:24:49.515 clat (usec): min=448, max=42483, avg=16216.66, stdev=19763.27 00:24:49.515 lat (usec): min=456, max=42493, avg=16226.41, stdev=19763.38 00:24:49.515 clat percentiles (usec): 00:24:49.515 | 1.00th=[ 465], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 490], 00:24:49.515 | 30.00th=[ 498], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 906], 00:24:49.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:24:49.515 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:24:49.515 | 99.99th=[42730] 00:24:49.515 bw ( KiB/s): min= 576, max= 1440, per=50.15%, avg=984.00, stdev=209.93, samples=20 00:24:49.515 iops : min= 144, max= 360, avg=246.00, stdev=52.48, samples=20 00:24:49.515 lat (usec) : 500=30.76%, 750=24.92%, 1000=4.71% 00:24:49.515 lat (msec) : 2=0.97%, 50=38.64% 00:24:49.515 cpu : usr=94.79%, sys=4.79%, ctx=29, majf=0, minf=9 00:24:49.515 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:49.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.515 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.515 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:49.515 00:24:49.515 Run status group 0 (all jobs): 00:24:49.515 READ: bw=1960KiB/s (2007kB/s), 975KiB/s-985KiB/s (999kB/s-1008kB/s), io=19.2MiB (20.1MB), run=10007-10008msec 00:24:49.515 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:49.515 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 ************************************ 00:24:49.516 END TEST fio_dif_1_multi_subsystems 00:24:49.516 ************************************ 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 00:24:49.516 real 0m11.100s 00:24:49.516 user 0m19.717s 00:24:49.516 sys 0m1.166s 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 05:30:53 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:49.516 05:30:53 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:49.516 05:30:53 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 ************************************ 00:24:49.516 START TEST fio_dif_rand_params 00:24:49.516 ************************************ 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 bdev_null0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 [2024-07-25 05:30:53.296491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.516 { 00:24:49.516 "params": { 00:24:49.516 "name": "Nvme$subsystem", 00:24:49.516 "trtype": "$TEST_TRANSPORT", 00:24:49.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.516 "adrfam": "ipv4", 00:24:49.516 "trsvcid": "$NVMF_PORT", 00:24:49.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.516 "hdgst": ${hdgst:-false}, 00:24:49.516 "ddgst": ${ddgst:-false} 00:24:49.516 }, 00:24:49.516 "method": "bdev_nvme_attach_controller" 00:24:49.516 } 00:24:49.516 EOF 00:24:49.516 )") 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:49.516 "params": { 00:24:49.516 "name": "Nvme0", 00:24:49.516 "trtype": "tcp", 00:24:49.516 "traddr": "10.0.0.2", 00:24:49.516 "adrfam": "ipv4", 00:24:49.516 "trsvcid": "4420", 00:24:49.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:49.516 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:49.516 "hdgst": false, 00:24:49.516 "ddgst": false 00:24:49.516 }, 00:24:49.516 "method": "bdev_nvme_attach_controller" 00:24:49.516 }' 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:49.516 05:30:53 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:49.516 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:49.516 ... 00:24:49.516 fio-3.35 00:24:49.516 Starting 3 threads 00:24:56.079 00:24:56.079 filename0: (groupid=0, jobs=1): err= 0: pid=97302: Thu Jul 25 05:30:58 2024 00:24:56.079 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(152MiB/5006msec) 00:24:56.079 slat (nsec): min=4755, max=42049, avg=12281.15, stdev=4123.45 00:24:56.079 clat (usec): min=6264, max=54070, avg=12368.99, stdev=3210.97 00:24:56.079 lat (usec): min=6277, max=54081, avg=12381.27, stdev=3210.92 00:24:56.079 clat percentiles (usec): 00:24:56.079 | 1.00th=[ 6718], 5.00th=[ 8029], 10.00th=[10421], 20.00th=[11469], 00:24:56.079 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:24:56.079 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[14091], 00:24:56.079 | 99.00th=[15139], 99.50th=[21103], 99.90th=[54264], 99.95th=[54264], 00:24:56.079 | 99.99th=[54264] 00:24:56.079 bw ( KiB/s): min=26112, max=33792, per=34.94%, avg=30950.40, stdev=2375.43, samples=10 00:24:56.079 iops : min= 204, max= 264, avg=241.80, stdev=18.56, samples=10 00:24:56.079 lat (msec) : 10=8.83%, 20=90.51%, 50=0.33%, 100=0.33% 00:24:56.079 cpu : usr=92.69%, sys=5.81%, ctx=38, majf=0, minf=0 00:24:56.079 IO depths : 1=8.2%, 2=91.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:56.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.079 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:56.079 filename0: (groupid=0, jobs=1): err= 0: pid=97303: Thu Jul 25 05:30:58 2024 00:24:56.079 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(125MiB/5005msec) 00:24:56.079 slat (nsec): min=4946, max=40970, avg=11131.39, stdev=4701.07 00:24:56.079 clat (usec): min=4392, max=23304, avg=14955.90, stdev=2398.54 00:24:56.079 lat (usec): min=4400, max=23317, avg=14967.03, stdev=2398.33 00:24:56.079 clat percentiles (usec): 00:24:56.079 | 1.00th=[ 4424], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[14615], 00:24:56.079 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:24:56.079 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16712], 95.00th=[16909], 00:24:56.079 | 99.00th=[17695], 99.50th=[18220], 99.90th=[23200], 99.95th=[23200], 00:24:56.079 | 99.99th=[23200] 00:24:56.079 bw ( KiB/s): min=23808, max=31488, per=28.88%, avg=25579.40, stdev=2232.71, samples=10 00:24:56.079 iops : min= 186, max= 246, avg=199.80, stdev=17.45, samples=10 00:24:56.079 lat (msec) : 10=6.29%, 20=93.41%, 50=0.30% 00:24:56.079 cpu : usr=92.77%, sys=5.86%, ctx=7, majf=0, minf=0 00:24:56.079 IO depths : 1=31.8%, 2=68.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:56.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.079 issued rwts: total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:56.079 filename0: (groupid=0, jobs=1): err= 0: pid=97304: Thu Jul 25 05:30:58 2024 00:24:56.079 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5007msec) 00:24:56.079 slat (nsec): min=6991, max=78401, avg=13869.81, stdev=5291.41 00:24:56.079 
clat (usec): min=6624, max=54141, avg=11986.87, stdev=5356.67 00:24:56.079 lat (usec): min=6635, max=54157, avg=12000.74, stdev=5356.70 00:24:56.079 clat percentiles (usec): 00:24:56.079 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:24:56.079 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:24:56.079 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:24:56.079 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:24:56.079 | 99.99th=[54264] 00:24:56.079 bw ( KiB/s): min=25344, max=35328, per=36.07%, avg=31948.80, stdev=3282.40, samples=10 00:24:56.079 iops : min= 198, max= 276, avg=249.60, stdev=25.64, samples=10 00:24:56.079 lat (msec) : 10=5.92%, 20=92.25%, 50=0.16%, 100=1.68% 00:24:56.079 cpu : usr=92.31%, sys=6.07%, ctx=14, majf=0, minf=0 00:24:56.079 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:56.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.079 issued rwts: total=1251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:56.079 00:24:56.079 Run status group 0 (all jobs): 00:24:56.079 READ: bw=86.5MiB/s (90.7MB/s), 25.0MiB/s-31.2MiB/s (26.2MB/s-32.7MB/s), io=433MiB (454MB), run=5005-5007msec 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:56.079 05:30:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 bdev_null0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 [2024-07-25 05:30:59.205242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 bdev_null1 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.079 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:56.079 05:30:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.080 bdev_null2 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.080 { 00:24:56.080 "params": { 00:24:56.080 "name": "Nvme$subsystem", 00:24:56.080 "trtype": "$TEST_TRANSPORT", 00:24:56.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.080 "adrfam": "ipv4", 00:24:56.080 "trsvcid": "$NVMF_PORT", 00:24:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.080 "hdgst": ${hdgst:-false}, 00:24:56.080 "ddgst": ${ddgst:-false} 00:24:56.080 }, 00:24:56.080 "method": "bdev_nvme_attach_controller" 00:24:56.080 } 00:24:56.080 EOF 00:24:56.080 )") 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.080 { 00:24:56.080 "params": { 00:24:56.080 "name": "Nvme$subsystem", 00:24:56.080 "trtype": "$TEST_TRANSPORT", 00:24:56.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.080 "adrfam": "ipv4", 00:24:56.080 "trsvcid": "$NVMF_PORT", 00:24:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.080 "hdgst": ${hdgst:-false}, 00:24:56.080 "ddgst": ${ddgst:-false} 00:24:56.080 }, 00:24:56.080 "method": "bdev_nvme_attach_controller" 00:24:56.080 } 00:24:56.080 EOF 00:24:56.080 )") 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.080 { 00:24:56.080 "params": { 00:24:56.080 "name": "Nvme$subsystem", 00:24:56.080 "trtype": "$TEST_TRANSPORT", 00:24:56.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.080 "adrfam": "ipv4", 00:24:56.080 "trsvcid": "$NVMF_PORT", 00:24:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.080 "hdgst": ${hdgst:-false}, 00:24:56.080 "ddgst": ${ddgst:-false} 00:24:56.080 }, 00:24:56.080 "method": "bdev_nvme_attach_controller" 00:24:56.080 } 00:24:56.080 EOF 00:24:56.080 )") 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:56.080 05:30:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:56.081 "params": { 00:24:56.081 "name": "Nvme0", 00:24:56.081 "trtype": "tcp", 00:24:56.081 "traddr": "10.0.0.2", 00:24:56.081 "adrfam": "ipv4", 00:24:56.081 "trsvcid": "4420", 00:24:56.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:56.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:56.081 "hdgst": false, 00:24:56.081 "ddgst": false 00:24:56.081 }, 00:24:56.081 "method": "bdev_nvme_attach_controller" 00:24:56.081 },{ 00:24:56.081 "params": { 00:24:56.081 "name": "Nvme1", 00:24:56.081 "trtype": "tcp", 00:24:56.081 "traddr": "10.0.0.2", 00:24:56.081 "adrfam": "ipv4", 00:24:56.081 "trsvcid": "4420", 00:24:56.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.081 "hdgst": false, 00:24:56.081 "ddgst": false 00:24:56.081 }, 00:24:56.081 "method": "bdev_nvme_attach_controller" 00:24:56.081 },{ 00:24:56.081 "params": { 00:24:56.081 "name": "Nvme2", 00:24:56.081 "trtype": "tcp", 00:24:56.081 "traddr": "10.0.0.2", 00:24:56.081 "adrfam": "ipv4", 00:24:56.081 "trsvcid": "4420", 00:24:56.081 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.081 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:56.081 "hdgst": false, 00:24:56.081 "ddgst": false 00:24:56.081 }, 00:24:56.081 "method": "bdev_nvme_attach_controller" 00:24:56.081 }' 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:56.081 05:30:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:56.081 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:56.081 ... 00:24:56.081 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:56.081 ... 00:24:56.081 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:56.081 ... 00:24:56.081 fio-3.35 00:24:56.081 Starting 24 threads 00:25:08.279 00:25:08.279 filename0: (groupid=0, jobs=1): err= 0: pid=97405: Thu Jul 25 05:31:10 2024 00:25:08.279 read: IOPS=198, BW=795KiB/s (814kB/s)(7972KiB/10032msec) 00:25:08.279 slat (usec): min=4, max=8021, avg=17.87, stdev=211.62 00:25:08.279 clat (msec): min=33, max=175, avg=80.42, stdev=24.94 00:25:08.279 lat (msec): min=33, max=175, avg=80.43, stdev=24.93 00:25:08.279 clat percentiles (msec): 00:25:08.279 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 58], 00:25:08.279 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 82], 00:25:08.279 | 70.00th=[ 91], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 126], 00:25:08.279 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 176], 00:25:08.279 | 99.99th=[ 176] 00:25:08.279 bw ( KiB/s): min= 512, max= 1120, per=4.22%, avg=790.65, stdev=156.52, samples=20 00:25:08.280 iops : min= 128, max= 280, avg=197.60, stdev=39.11, samples=20 00:25:08.280 lat (msec) : 50=11.69%, 100=67.08%, 250=21.22% 00:25:08.280 cpu : usr=42.16%, sys=1.32%, ctx=1398, majf=0, minf=9 00:25:08.280 IO depths : 1=1.0%, 2=2.2%, 4=9.8%, 8=74.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=97406: Thu Jul 25 05:31:10 2024 00:25:08.280 read: IOPS=215, BW=861KiB/s (882kB/s)(8664KiB/10061msec) 00:25:08.280 slat (usec): min=4, max=8024, avg=25.44, stdev=344.08 00:25:08.280 clat (usec): min=1688, max=155687, avg=74088.79, stdev=28154.27 00:25:08.280 lat (usec): min=1698, max=155701, avg=74114.23, stdev=28167.07 00:25:08.280 clat percentiles (msec): 00:25:08.280 | 1.00th=[ 3], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 48], 00:25:08.280 | 30.00th=[ 60], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:25:08.280 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 121], 00:25:08.280 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:25:08.280 | 99.99th=[ 157] 00:25:08.280 bw ( KiB/s): min= 584, max= 1464, per=4.59%, avg=859.90, stdev=196.99, samples=20 00:25:08.280 iops : min= 146, max= 366, avg=214.95, stdev=49.25, samples=20 00:25:08.280 lat (msec) : 2=0.74%, 4=0.74%, 10=2.22%, 50=20.08%, 100=60.39% 00:25:08.280 lat (msec) : 250=15.84% 00:25:08.280 cpu : usr=35.00%, sys=0.94%, 
ctx=881, majf=0, minf=9 00:25:08.280 IO depths : 1=1.5%, 2=3.0%, 4=10.3%, 8=73.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=97407: Thu Jul 25 05:31:10 2024 00:25:08.280 read: IOPS=215, BW=863KiB/s (884kB/s)(8668KiB/10045msec) 00:25:08.280 slat (usec): min=4, max=8017, avg=16.27, stdev=192.60 00:25:08.280 clat (msec): min=3, max=162, avg=73.99, stdev=25.53 00:25:08.280 lat (msec): min=3, max=162, avg=74.00, stdev=25.53 00:25:08.280 clat percentiles (msec): 00:25:08.280 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:25:08.280 | 30.00th=[ 60], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:25:08.280 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:25:08.280 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 163], 00:25:08.280 | 99.99th=[ 163] 00:25:08.280 bw ( KiB/s): min= 656, max= 1072, per=4.59%, avg=860.30, stdev=135.47, samples=20 00:25:08.280 iops : min= 164, max= 268, avg=215.05, stdev=33.90, samples=20 00:25:08.280 lat (msec) : 4=0.74%, 10=0.74%, 20=0.74%, 50=18.64%, 100=65.99% 00:25:08.280 lat (msec) : 250=13.15% 00:25:08.280 cpu : usr=33.71%, sys=1.13%, ctx=907, majf=0, minf=0 00:25:08.280 IO depths : 1=0.7%, 2=1.5%, 4=8.7%, 8=76.2%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=97408: Thu Jul 25 05:31:10 2024 00:25:08.280 read: IOPS=194, BW=778KiB/s (797kB/s)(7796KiB/10022msec) 00:25:08.280 slat (usec): min=5, max=8023, avg=16.86, stdev=202.93 00:25:08.280 clat (msec): min=32, max=187, avg=82.14, stdev=30.69 00:25:08.280 lat (msec): min=32, max=187, avg=82.15, stdev=30.69 00:25:08.280 clat percentiles (msec): 00:25:08.280 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:25:08.280 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:25:08.280 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 126], 95.00th=[ 148], 00:25:08.280 | 99.00th=[ 167], 99.50th=[ 184], 99.90th=[ 188], 99.95th=[ 188], 00:25:08.280 | 99.99th=[ 188] 00:25:08.280 bw ( KiB/s): min= 384, max= 1040, per=4.14%, avg=775.45, stdev=203.19, samples=20 00:25:08.280 iops : min= 96, max= 260, avg=193.85, stdev=50.81, samples=20 00:25:08.280 lat (msec) : 50=16.37%, 100=61.62%, 250=22.01% 00:25:08.280 cpu : usr=32.68%, sys=0.83%, ctx=895, majf=0, minf=9 00:25:08.280 IO depths : 1=0.8%, 2=1.8%, 4=7.2%, 8=76.2%, 16=14.0%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=89.8%, 8=6.8%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=97409: Thu Jul 25 05:31:10 2024 00:25:08.280 read: IOPS=180, BW=720KiB/s (737kB/s)(7228KiB/10036msec) 00:25:08.280 slat (usec): min=4, max=4019, avg=15.62, 
stdev=133.39 00:25:08.280 clat (msec): min=37, max=167, avg=88.69, stdev=28.35 00:25:08.280 lat (msec): min=37, max=167, avg=88.71, stdev=28.35 00:25:08.280 clat percentiles (msec): 00:25:08.280 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:25:08.280 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 93], 00:25:08.280 | 70.00th=[ 101], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 144], 00:25:08.280 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 167], 99.95th=[ 169], 00:25:08.280 | 99.99th=[ 169] 00:25:08.280 bw ( KiB/s): min= 464, max= 1072, per=3.82%, avg=715.95, stdev=185.47, samples=20 00:25:08.280 iops : min= 116, max= 268, avg=178.90, stdev=46.40, samples=20 00:25:08.280 lat (msec) : 50=8.74%, 100=60.65%, 250=30.60% 00:25:08.280 cpu : usr=37.39%, sys=0.98%, ctx=1034, majf=0, minf=9 00:25:08.280 IO depths : 1=0.8%, 2=1.7%, 4=7.7%, 8=76.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=89.4%, 8=6.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=1807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=97410: Thu Jul 25 05:31:10 2024 00:25:08.280 read: IOPS=177, BW=710KiB/s (728kB/s)(7116KiB/10016msec) 00:25:08.280 slat (usec): min=4, max=5026, avg=13.93, stdev=118.98 00:25:08.280 clat (msec): min=45, max=184, avg=89.93, stdev=26.41 00:25:08.280 lat (msec): min=45, max=184, avg=89.95, stdev=26.41 00:25:08.280 clat percentiles (msec): 00:25:08.280 | 1.00th=[ 47], 5.00th=[ 53], 10.00th=[ 62], 20.00th=[ 69], 00:25:08.280 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 94], 00:25:08.280 | 70.00th=[ 104], 80.00th=[ 113], 90.00th=[ 127], 95.00th=[ 144], 00:25:08.280 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 184], 00:25:08.280 | 99.99th=[ 184] 00:25:08.280 bw ( KiB/s): min= 512, max= 896, per=3.78%, avg=707.55, stdev=119.75, samples=20 00:25:08.280 iops : min= 128, max= 224, avg=176.85, stdev=29.92, samples=20 00:25:08.280 lat (msec) : 50=4.44%, 100=62.00%, 250=33.56% 00:25:08.280 cpu : usr=44.30%, sys=1.29%, ctx=1437, majf=0, minf=9 00:25:08.280 IO depths : 1=3.4%, 2=7.3%, 4=17.4%, 8=62.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=97411: Thu Jul 25 05:31:10 2024 00:25:08.280 read: IOPS=218, BW=876KiB/s (897kB/s)(8788KiB/10032msec) 00:25:08.280 slat (usec): min=5, max=8024, avg=29.28, stdev=367.94 00:25:08.280 clat (msec): min=33, max=179, avg=72.82, stdev=25.19 00:25:08.280 lat (msec): min=33, max=179, avg=72.85, stdev=25.20 00:25:08.280 clat percentiles (msec): 00:25:08.280 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 50], 00:25:08.280 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:25:08.280 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 110], 95.00th=[ 121], 00:25:08.280 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 180], 00:25:08.280 | 99.99th=[ 180] 00:25:08.280 bw ( KiB/s): min= 616, max= 1168, per=4.67%, avg=874.25, stdev=188.34, samples=20 00:25:08.280 iops : min= 154, max= 292, avg=218.50, stdev=47.12, samples=20 
00:25:08.280 lat (msec) : 50=20.98%, 100=65.13%, 250=13.88% 00:25:08.280 cpu : usr=37.29%, sys=1.05%, ctx=1228, majf=0, minf=9 00:25:08.280 IO depths : 1=0.5%, 2=1.0%, 4=7.5%, 8=78.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=89.3%, 8=6.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=97412: Thu Jul 25 05:31:10 2024 00:25:08.280 read: IOPS=187, BW=751KiB/s (769kB/s)(7540KiB/10035msec) 00:25:08.280 slat (usec): min=5, max=8020, avg=18.28, stdev=212.39 00:25:08.280 clat (msec): min=35, max=184, avg=85.02, stdev=26.21 00:25:08.280 lat (msec): min=35, max=184, avg=85.04, stdev=26.21 00:25:08.280 clat percentiles (msec): 00:25:08.280 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 63], 00:25:08.280 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:25:08.280 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 142], 00:25:08.280 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 184], 00:25:08.280 | 99.99th=[ 184] 00:25:08.280 bw ( KiB/s): min= 507, max= 992, per=3.99%, avg=748.75, stdev=152.25, samples=20 00:25:08.280 iops : min= 126, max= 248, avg=187.10, stdev=38.10, samples=20 00:25:08.280 lat (msec) : 50=8.59%, 100=63.71%, 250=27.69% 00:25:08.280 cpu : usr=37.18%, sys=1.25%, ctx=1085, majf=0, minf=9 00:25:08.280 IO depths : 1=1.2%, 2=2.9%, 4=11.0%, 8=72.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.280 issued rwts: total=1885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97413: Thu Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=177, BW=711KiB/s (728kB/s)(7112KiB/10005msec) 00:25:08.281 slat (usec): min=4, max=8037, avg=19.94, stdev=268.88 00:25:08.281 clat (msec): min=13, max=199, avg=89.85, stdev=30.29 00:25:08.281 lat (msec): min=13, max=199, avg=89.87, stdev=30.30 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 71], 00:25:08.281 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 96], 00:25:08.281 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 146], 00:25:08.281 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:25:08.281 | 99.99th=[ 201] 00:25:08.281 bw ( KiB/s): min= 384, max= 1024, per=3.74%, avg=701.16, stdev=175.76, samples=19 00:25:08.281 iops : min= 96, max= 256, avg=175.26, stdev=43.91, samples=19 00:25:08.281 lat (msec) : 20=0.34%, 50=8.61%, 100=59.06%, 250=32.00% 00:25:08.281 cpu : usr=31.74%, sys=0.80%, ctx=827, majf=0, minf=9 00:25:08.281 IO depths : 1=1.9%, 2=4.3%, 4=13.5%, 8=69.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:25:08.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 complete : 0=0.0%, 4=90.5%, 8=4.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 issued rwts: total=1778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97414: Thu Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=186, BW=745KiB/s 
(763kB/s)(7472KiB/10023msec) 00:25:08.281 slat (usec): min=4, max=8025, avg=19.32, stdev=262.17 00:25:08.281 clat (msec): min=35, max=179, avg=85.69, stdev=27.46 00:25:08.281 lat (msec): min=35, max=179, avg=85.71, stdev=27.46 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:25:08.281 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 86], 00:25:08.281 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 144], 00:25:08.281 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:25:08.281 | 99.99th=[ 180] 00:25:08.281 bw ( KiB/s): min= 384, max= 1024, per=3.95%, avg=740.50, stdev=155.17, samples=20 00:25:08.281 iops : min= 96, max= 256, avg=185.10, stdev=38.83, samples=20 00:25:08.281 lat (msec) : 50=9.69%, 100=64.45%, 250=25.86% 00:25:08.281 cpu : usr=31.50%, sys=1.00%, ctx=828, majf=0, minf=9 00:25:08.281 IO depths : 1=1.0%, 2=2.1%, 4=9.1%, 8=75.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:08.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97415: Thu Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=212, BW=852KiB/s (872kB/s)(8548KiB/10033msec) 00:25:08.281 slat (usec): min=4, max=4031, avg=12.78, stdev=87.07 00:25:08.281 clat (msec): min=34, max=172, avg=74.98, stdev=24.93 00:25:08.281 lat (msec): min=34, max=172, avg=75.00, stdev=24.93 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:25:08.281 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:25:08.281 | 70.00th=[ 83], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 120], 00:25:08.281 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 174], 00:25:08.281 | 99.99th=[ 174] 00:25:08.281 bw ( KiB/s): min= 507, max= 1072, per=4.52%, avg=847.95, stdev=179.93, samples=20 00:25:08.281 iops : min= 126, max= 268, avg=211.90, stdev=45.09, samples=20 00:25:08.281 lat (msec) : 50=18.25%, 100=63.55%, 250=18.20% 00:25:08.281 cpu : usr=43.39%, sys=1.48%, ctx=1149, majf=0, minf=9 00:25:08.281 IO depths : 1=1.1%, 2=2.3%, 4=8.7%, 8=75.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:08.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97416: Thu Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=209, BW=839KiB/s (859kB/s)(8440KiB/10065msec) 00:25:08.281 slat (usec): min=3, max=10023, avg=27.13, stdev=372.25 00:25:08.281 clat (msec): min=8, max=173, avg=76.07, stdev=23.56 00:25:08.281 lat (msec): min=8, max=173, avg=76.10, stdev=23.56 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:25:08.281 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 82], 00:25:08.281 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 115], 00:25:08.281 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 174], 00:25:08.281 | 99.99th=[ 174] 00:25:08.281 bw ( KiB/s): min= 688, max= 1072, per=4.47%, avg=837.60, stdev=105.32, samples=20 
00:25:08.281 iops : min= 172, max= 268, avg=209.35, stdev=26.29, samples=20 00:25:08.281 lat (msec) : 10=1.52%, 50=12.65%, 100=72.09%, 250=13.74% 00:25:08.281 cpu : usr=35.73%, sys=1.08%, ctx=1020, majf=0, minf=9 00:25:08.281 IO depths : 1=1.4%, 2=3.0%, 4=10.4%, 8=73.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:25:08.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97417: Thu Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=168, BW=676KiB/s (692kB/s)(6768KiB/10019msec) 00:25:08.281 slat (usec): min=4, max=8017, avg=15.47, stdev=194.70 00:25:08.281 clat (msec): min=35, max=266, avg=94.62, stdev=30.68 00:25:08.281 lat (msec): min=35, max=266, avg=94.63, stdev=30.68 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 39], 5.00th=[ 55], 10.00th=[ 63], 20.00th=[ 72], 00:25:08.281 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 96], 00:25:08.281 | 70.00th=[ 107], 80.00th=[ 116], 90.00th=[ 133], 95.00th=[ 155], 00:25:08.281 | 99.00th=[ 184], 99.50th=[ 207], 99.90th=[ 268], 99.95th=[ 268], 00:25:08.281 | 99.99th=[ 268] 00:25:08.281 bw ( KiB/s): min= 384, max= 880, per=3.58%, avg=670.30, stdev=134.38, samples=20 00:25:08.281 iops : min= 96, max= 220, avg=167.55, stdev=33.58, samples=20 00:25:08.281 lat (msec) : 50=4.91%, 100=60.70%, 250=34.10%, 500=0.30% 00:25:08.281 cpu : usr=33.59%, sys=0.95%, ctx=927, majf=0, minf=9 00:25:08.281 IO depths : 1=3.2%, 2=7.2%, 4=18.6%, 8=61.5%, 16=9.5%, 32=0.0%, >=64=0.0% 00:25:08.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97418: Thu Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=206, BW=826KiB/s (846kB/s)(8280KiB/10028msec) 00:25:08.281 slat (usec): min=5, max=8024, avg=18.44, stdev=248.98 00:25:08.281 clat (msec): min=29, max=188, avg=77.38, stdev=25.50 00:25:08.281 lat (msec): min=29, max=188, avg=77.39, stdev=25.50 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:25:08.281 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:25:08.281 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 127], 00:25:08.281 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 188], 99.95th=[ 188], 00:25:08.281 | 99.99th=[ 188] 00:25:08.281 bw ( KiB/s): min= 440, max= 1168, per=4.38%, avg=821.05, stdev=175.06, samples=20 00:25:08.281 iops : min= 110, max= 292, avg=205.20, stdev=43.74, samples=20 00:25:08.281 lat (msec) : 50=17.58%, 100=67.49%, 250=14.93% 00:25:08.281 cpu : usr=31.72%, sys=0.80%, ctx=828, majf=0, minf=9 00:25:08.281 IO depths : 1=0.7%, 2=1.5%, 4=7.2%, 8=77.0%, 16=13.6%, 32=0.0%, >=64=0.0% 00:25:08.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 complete : 0=0.0%, 4=89.5%, 8=6.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 issued rwts: total=2070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97419: Thu 
Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=217, BW=872KiB/s (893kB/s)(8748KiB/10036msec) 00:25:08.281 slat (usec): min=4, max=8021, avg=13.98, stdev=171.35 00:25:08.281 clat (msec): min=33, max=182, avg=73.26, stdev=22.90 00:25:08.281 lat (msec): min=33, max=182, avg=73.27, stdev=22.90 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 40], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:25:08.281 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 73], 00:25:08.281 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 118], 00:25:08.281 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 182], 99.95th=[ 182], 00:25:08.281 | 99.99th=[ 182] 00:25:08.281 bw ( KiB/s): min= 512, max= 1248, per=4.64%, avg=868.40, stdev=175.15, samples=20 00:25:08.281 iops : min= 128, max= 312, avg=217.10, stdev=43.79, samples=20 00:25:08.281 lat (msec) : 50=17.06%, 100=69.09%, 250=13.85% 00:25:08.281 cpu : usr=41.74%, sys=1.32%, ctx=1376, majf=0, minf=9 00:25:08.281 IO depths : 1=0.8%, 2=2.0%, 4=9.1%, 8=75.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:25:08.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.281 issued rwts: total=2187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.281 filename1: (groupid=0, jobs=1): err= 0: pid=97420: Thu Jul 25 05:31:10 2024 00:25:08.281 read: IOPS=211, BW=845KiB/s (865kB/s)(8476KiB/10030msec) 00:25:08.281 slat (nsec): min=3826, max=51627, avg=10889.02, stdev=4057.92 00:25:08.281 clat (msec): min=31, max=173, avg=75.66, stdev=22.78 00:25:08.281 lat (msec): min=31, max=173, avg=75.67, stdev=22.78 00:25:08.281 clat percentiles (msec): 00:25:08.281 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:25:08.281 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:25:08.281 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 116], 00:25:08.281 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 174], 00:25:08.281 | 99.99th=[ 174] 00:25:08.281 bw ( KiB/s): min= 560, max= 1072, per=4.49%, avg=840.55, stdev=119.87, samples=20 00:25:08.282 iops : min= 140, max= 268, avg=210.10, stdev=29.98, samples=20 00:25:08.282 lat (msec) : 50=15.67%, 100=72.20%, 250=12.13% 00:25:08.282 cpu : usr=35.72%, sys=1.03%, ctx=1087, majf=0, minf=9 00:25:08.282 IO depths : 1=0.4%, 2=0.8%, 4=5.8%, 8=79.0%, 16=14.0%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=89.1%, 8=7.3%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=2119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97421: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=202, BW=810KiB/s (829kB/s)(8124KiB/10031msec) 00:25:08.282 slat (usec): min=3, max=8034, avg=22.26, stdev=275.98 00:25:08.282 clat (msec): min=33, max=179, avg=78.80, stdev=26.41 00:25:08.282 lat (msec): min=33, max=179, avg=78.82, stdev=26.41 00:25:08.282 clat percentiles (msec): 00:25:08.282 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:25:08.282 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:25:08.282 | 70.00th=[ 89], 80.00th=[ 104], 90.00th=[ 116], 95.00th=[ 128], 00:25:08.282 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:25:08.282 | 99.99th=[ 180] 00:25:08.282 bw ( 
KiB/s): min= 464, max= 1080, per=4.30%, avg=805.85, stdev=198.77, samples=20 00:25:08.282 iops : min= 116, max= 270, avg=201.40, stdev=49.75, samples=20 00:25:08.282 lat (msec) : 50=14.52%, 100=64.25%, 250=21.22% 00:25:08.282 cpu : usr=39.29%, sys=1.13%, ctx=1125, majf=0, minf=9 00:25:08.282 IO depths : 1=0.8%, 2=1.8%, 4=9.7%, 8=74.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=2031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97422: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=200, BW=804KiB/s (823kB/s)(8072KiB/10041msec) 00:25:08.282 slat (usec): min=3, max=4018, avg=14.34, stdev=120.11 00:25:08.282 clat (msec): min=34, max=188, avg=79.52, stdev=29.92 00:25:08.282 lat (msec): min=34, max=188, avg=79.53, stdev=29.92 00:25:08.282 clat percentiles (msec): 00:25:08.282 | 1.00th=[ 38], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 53], 00:25:08.282 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 81], 00:25:08.282 | 70.00th=[ 93], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 142], 00:25:08.282 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 188], 99.95th=[ 188], 00:25:08.282 | 99.99th=[ 188] 00:25:08.282 bw ( KiB/s): min= 384, max= 1088, per=4.27%, avg=800.80, stdev=201.22, samples=20 00:25:08.282 iops : min= 96, max= 272, avg=200.20, stdev=50.30, samples=20 00:25:08.282 lat (msec) : 50=16.30%, 100=61.84%, 250=21.85% 00:25:08.282 cpu : usr=38.40%, sys=1.16%, ctx=1246, majf=0, minf=9 00:25:08.282 IO depths : 1=1.2%, 2=2.7%, 4=10.9%, 8=73.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97423: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=171, BW=684KiB/s (701kB/s)(6848KiB/10009msec) 00:25:08.282 slat (usec): min=3, max=4017, avg=13.07, stdev=96.92 00:25:08.282 clat (msec): min=11, max=261, avg=93.45, stdev=28.34 00:25:08.282 lat (msec): min=11, max=261, avg=93.46, stdev=28.35 00:25:08.282 clat percentiles (msec): 00:25:08.282 | 1.00th=[ 40], 5.00th=[ 56], 10.00th=[ 65], 20.00th=[ 72], 00:25:08.282 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 97], 00:25:08.282 | 70.00th=[ 110], 80.00th=[ 115], 90.00th=[ 128], 95.00th=[ 142], 00:25:08.282 | 99.00th=[ 165], 99.50th=[ 199], 99.90th=[ 262], 99.95th=[ 262], 00:25:08.282 | 99.99th=[ 262] 00:25:08.282 bw ( KiB/s): min= 344, max= 896, per=3.58%, avg=671.05, stdev=135.13, samples=19 00:25:08.282 iops : min= 86, max= 224, avg=167.74, stdev=33.76, samples=19 00:25:08.282 lat (msec) : 20=0.93%, 50=3.04%, 100=57.30%, 250=38.43%, 500=0.29% 00:25:08.282 cpu : usr=37.55%, sys=1.24%, ctx=1063, majf=0, minf=9 00:25:08.282 IO depths : 1=3.3%, 2=7.2%, 4=18.6%, 8=61.5%, 16=9.3%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=1712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97424: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=244, BW=979KiB/s (1003kB/s)(9844KiB/10055msec) 00:25:08.282 slat (usec): min=7, max=5016, avg=17.08, stdev=169.84 00:25:08.282 clat (msec): min=5, max=202, avg=65.21, stdev=23.80 00:25:08.282 lat (msec): min=5, max=202, avg=65.23, stdev=23.80 00:25:08.282 clat percentiles (msec): 00:25:08.282 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 47], 00:25:08.282 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 68], 00:25:08.282 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 100], 95.00th=[ 115], 00:25:08.282 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 203], 99.95th=[ 203], 00:25:08.282 | 99.99th=[ 203] 00:25:08.282 bw ( KiB/s): min= 600, max= 1248, per=5.22%, avg=978.00, stdev=205.83, samples=20 00:25:08.282 iops : min= 150, max= 312, avg=244.50, stdev=51.46, samples=20 00:25:08.282 lat (msec) : 10=0.65%, 20=0.65%, 50=27.18%, 100=62.25%, 250=9.26% 00:25:08.282 cpu : usr=53.13%, sys=1.43%, ctx=1975, majf=0, minf=9 00:25:08.282 IO depths : 1=0.6%, 2=1.3%, 4=7.6%, 8=77.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=89.4%, 8=5.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97425: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=167, BW=670KiB/s (686kB/s)(6708KiB/10007msec) 00:25:08.282 slat (usec): min=4, max=2029, avg=12.24, stdev=49.45 00:25:08.282 clat (msec): min=29, max=286, avg=95.37, stdev=31.94 00:25:08.282 lat (msec): min=29, max=286, avg=95.38, stdev=31.94 00:25:08.282 clat percentiles (msec): 00:25:08.282 | 1.00th=[ 45], 5.00th=[ 56], 10.00th=[ 68], 20.00th=[ 72], 00:25:08.282 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 101], 00:25:08.282 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 131], 95.00th=[ 159], 00:25:08.282 | 99.00th=[ 167], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 288], 00:25:08.282 | 99.99th=[ 288] 00:25:08.282 bw ( KiB/s): min= 344, max= 896, per=3.48%, avg=652.11, stdev=151.20, samples=19 00:25:08.282 iops : min= 86, max= 224, avg=163.00, stdev=37.78, samples=19 00:25:08.282 lat (msec) : 50=3.10%, 100=56.65%, 250=39.30%, 500=0.95% 00:25:08.282 cpu : usr=42.06%, sys=1.28%, ctx=1503, majf=0, minf=9 00:25:08.282 IO depths : 1=4.0%, 2=8.3%, 4=19.3%, 8=59.8%, 16=8.6%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97426: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=183, BW=733KiB/s (751kB/s)(7348KiB/10021msec) 00:25:08.282 slat (usec): min=4, max=8037, avg=15.11, stdev=187.31 00:25:08.282 clat (msec): min=35, max=195, avg=87.14, stdev=28.72 00:25:08.282 lat (msec): min=35, max=195, avg=87.15, stdev=28.72 00:25:08.282 clat percentiles (msec): 00:25:08.282 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 63], 00:25:08.282 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 87], 00:25:08.282 | 70.00th=[ 105], 80.00th=[ 115], 90.00th=[ 122], 95.00th=[ 144], 00:25:08.282 | 99.00th=[ 167], 
99.50th=[ 167], 99.90th=[ 197], 99.95th=[ 197], 00:25:08.282 | 99.99th=[ 197] 00:25:08.282 bw ( KiB/s): min= 510, max= 1072, per=3.89%, avg=728.10, stdev=179.92, samples=20 00:25:08.282 iops : min= 127, max= 268, avg=182.00, stdev=45.01, samples=20 00:25:08.282 lat (msec) : 50=10.13%, 100=58.57%, 250=31.30% 00:25:08.282 cpu : usr=34.77%, sys=0.88%, ctx=950, majf=0, minf=9 00:25:08.282 IO depths : 1=2.2%, 2=4.9%, 4=13.6%, 8=68.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97427: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=169, BW=676KiB/s (693kB/s)(6764KiB/10001msec) 00:25:08.282 slat (usec): min=3, max=8019, avg=18.07, stdev=217.91 00:25:08.282 clat (msec): min=5, max=322, avg=94.48, stdev=34.76 00:25:08.282 lat (msec): min=5, max=322, avg=94.50, stdev=34.76 00:25:08.282 clat percentiles (msec): 00:25:08.282 | 1.00th=[ 44], 5.00th=[ 56], 10.00th=[ 66], 20.00th=[ 71], 00:25:08.282 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 92], 60.00th=[ 99], 00:25:08.282 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 130], 95.00th=[ 144], 00:25:08.282 | 99.00th=[ 190], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:25:08.282 | 99.99th=[ 321] 00:25:08.282 bw ( KiB/s): min= 256, max= 896, per=3.54%, avg=662.68, stdev=157.73, samples=19 00:25:08.282 iops : min= 64, max= 224, avg=165.63, stdev=39.41, samples=19 00:25:08.282 lat (msec) : 10=0.53%, 20=0.41%, 50=2.07%, 100=57.84%, 250=38.20% 00:25:08.282 lat (msec) : 500=0.95% 00:25:08.282 cpu : usr=37.66%, sys=1.28%, ctx=1209, majf=0, minf=9 00:25:08.282 IO depths : 1=2.7%, 2=5.7%, 4=15.9%, 8=65.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:25:08.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 complete : 0=0.0%, 4=91.1%, 8=3.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.282 issued rwts: total=1691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.282 filename2: (groupid=0, jobs=1): err= 0: pid=97428: Thu Jul 25 05:31:10 2024 00:25:08.282 read: IOPS=177, BW=712KiB/s (729kB/s)(7128KiB/10017msec) 00:25:08.283 slat (usec): min=4, max=4021, avg=15.31, stdev=130.45 00:25:08.283 clat (msec): min=34, max=191, avg=89.73, stdev=28.41 00:25:08.283 lat (msec): min=34, max=191, avg=89.74, stdev=28.41 00:25:08.283 clat percentiles (msec): 00:25:08.283 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 64], 00:25:08.283 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 96], 00:25:08.283 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 125], 95.00th=[ 146], 00:25:08.283 | 99.00th=[ 159], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 192], 00:25:08.283 | 99.99th=[ 192] 00:25:08.283 bw ( KiB/s): min= 496, max= 1072, per=3.79%, avg=710.30, stdev=178.26, samples=20 00:25:08.283 iops : min= 124, max= 268, avg=177.55, stdev=44.56, samples=20 00:25:08.283 lat (msec) : 50=9.65%, 100=54.15%, 250=36.20% 00:25:08.283 cpu : usr=42.11%, sys=1.34%, ctx=1279, majf=0, minf=9 00:25:08.283 IO depths : 1=3.3%, 2=6.8%, 4=16.0%, 8=64.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:25:08.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.283 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:08.283 issued rwts: total=1782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:08.283 00:25:08.283 Run status group 0 (all jobs): 00:25:08.283 READ: bw=18.3MiB/s (19.2MB/s), 670KiB/s-979KiB/s (686kB/s-1003kB/s), io=184MiB (193MB), run=10001-10065msec 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 bdev_null0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 [2024-07-25 05:31:10.531999] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:08.283 05:31:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 bdev_null1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.283 { 00:25:08.283 "params": { 00:25:08.283 "name": "Nvme$subsystem", 00:25:08.283 "trtype": "$TEST_TRANSPORT", 00:25:08.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.283 "adrfam": "ipv4", 00:25:08.283 "trsvcid": "$NVMF_PORT", 00:25:08.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.283 "hdgst": ${hdgst:-false}, 00:25:08.283 "ddgst": ${ddgst:-false} 00:25:08.283 }, 00:25:08.283 "method": "bdev_nvme_attach_controller" 00:25:08.283 } 00:25:08.283 EOF 00:25:08.283 )") 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.283 05:31:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:08.283 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.284 { 00:25:08.284 "params": { 00:25:08.284 "name": "Nvme$subsystem", 00:25:08.284 "trtype": "$TEST_TRANSPORT", 00:25:08.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.284 "adrfam": "ipv4", 00:25:08.284 "trsvcid": "$NVMF_PORT", 00:25:08.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.284 "hdgst": ${hdgst:-false}, 00:25:08.284 "ddgst": ${ddgst:-false} 00:25:08.284 }, 00:25:08.284 "method": "bdev_nvme_attach_controller" 00:25:08.284 } 00:25:08.284 EOF 00:25:08.284 )") 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:08.284 "params": { 00:25:08.284 "name": "Nvme0", 00:25:08.284 "trtype": "tcp", 00:25:08.284 "traddr": "10.0.0.2", 00:25:08.284 "adrfam": "ipv4", 00:25:08.284 "trsvcid": "4420", 00:25:08.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:08.284 "hdgst": false, 00:25:08.284 "ddgst": false 00:25:08.284 }, 00:25:08.284 "method": "bdev_nvme_attach_controller" 00:25:08.284 },{ 00:25:08.284 "params": { 00:25:08.284 "name": "Nvme1", 00:25:08.284 "trtype": "tcp", 00:25:08.284 "traddr": "10.0.0.2", 00:25:08.284 "adrfam": "ipv4", 00:25:08.284 "trsvcid": "4420", 00:25:08.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.284 "hdgst": false, 00:25:08.284 "ddgst": false 00:25:08.284 }, 00:25:08.284 "method": "bdev_nvme_attach_controller" 00:25:08.284 }' 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:08.284 05:31:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.284 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:08.284 ... 00:25:08.284 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:08.284 ... 
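Before the run output below: gen_nvmf_target_json, traced above, accumulates one bdev_nvme_attach_controller stanza per subsystem via a heredoc, joins them with IFS=',' after a jq pass, and hands the result to fio on /dev/fd/62 while gen_fio_conf supplies the job file on /dev/fd/61. A condensed, hedged bash sketch of the same wiring; the jobfile.fio name and the exact top-level wrapper object are assumptions, everything else mirrors the trace:

# Build one attach-controller config entry per target subsystem,
# mirroring the heredoc accumulation shown in the xtrace above.
config=()
for sub in 0 1; do
  config+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$sub", "trtype": "tcp", "adrfam": "ipv4",
              "traddr": "10.0.0.2", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
              "hostnqn": "nqn.2016-06.io.spdk:host$sub",
              "hdgst": false, "ddgst": false } }
EOF
  )")
done

# Join the entries with commas and wrap them in an SPDK-style JSON config
# (the wrapper shape is an assumption; the harness assembles it via jq).
json="{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[$(IFS=,; echo "${config[*]}")]}]}"

# Drive fio through the external SPDK bdev ioengine, as in the trace;
# the harness passes the config and job file via /dev/fd instead of files.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=<(echo "$json") jobfile.fio

The filename0/filename1 job lines fio printed just above come from the generated job file, with each job presumably bound to one of the attached Nvme*n1 bdevs.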
00:25:08.284 fio-3.35 00:25:08.284 Starting 4 threads 00:25:12.474 00:25:12.474 filename0: (groupid=0, jobs=1): err= 0: pid=97550: Thu Jul 25 05:31:16 2024 00:25:12.474 read: IOPS=1909, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5001msec) 00:25:12.474 slat (usec): min=7, max=172, avg= 9.10, stdev= 3.42 00:25:12.474 clat (usec): min=1637, max=5061, avg=4143.66, stdev=114.65 00:25:12.474 lat (usec): min=1646, max=5085, avg=4152.75, stdev=114.57 00:25:12.474 clat percentiles (usec): 00:25:12.474 | 1.00th=[ 4015], 5.00th=[ 4113], 10.00th=[ 4113], 20.00th=[ 4113], 00:25:12.474 | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4146], 00:25:12.474 | 70.00th=[ 4178], 80.00th=[ 4178], 90.00th=[ 4178], 95.00th=[ 4228], 00:25:12.474 | 99.00th=[ 4293], 99.50th=[ 4293], 99.90th=[ 4490], 99.95th=[ 4621], 00:25:12.474 | 99.99th=[ 5080] 00:25:12.474 bw ( KiB/s): min=15232, max=15360, per=25.03%, avg=15274.67, stdev=64.00, samples=9 00:25:12.474 iops : min= 1904, max= 1920, avg=1909.33, stdev= 8.00, samples=9 00:25:12.474 lat (msec) : 2=0.08%, 4=0.47%, 10=99.45% 00:25:12.474 cpu : usr=94.12%, sys=4.46%, ctx=25, majf=0, minf=0 00:25:12.474 IO depths : 1=10.4%, 2=24.0%, 4=51.0%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.474 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.474 issued rwts: total=9550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:12.474 filename0: (groupid=0, jobs=1): err= 0: pid=97551: Thu Jul 25 05:31:16 2024 00:25:12.474 read: IOPS=1907, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5003msec) 00:25:12.474 slat (nsec): min=3950, max=41645, avg=15909.14, stdev=3393.60 00:25:12.474 clat (usec): min=2586, max=5102, avg=4114.61, stdev=81.23 00:25:12.474 lat (usec): min=2592, max=5116, avg=4130.52, stdev=81.82 00:25:12.474 clat percentiles (usec): 00:25:12.474 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:25:12.474 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4113], 00:25:12.474 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4178], 00:25:12.474 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[ 5014], 00:25:12.474 | 99.99th=[ 5080] 00:25:12.474 bw ( KiB/s): min=15232, max=15360, per=25.00%, avg=15260.60, stdev=53.22, samples=10 00:25:12.474 iops : min= 1904, max= 1920, avg=1907.50, stdev= 6.65, samples=10 00:25:12.474 lat (msec) : 4=0.42%, 10=99.58% 00:25:12.474 cpu : usr=93.50%, sys=5.32%, ctx=7, majf=0, minf=0 00:25:12.474 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.474 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.474 issued rwts: total=9544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:12.474 filename1: (groupid=0, jobs=1): err= 0: pid=97552: Thu Jul 25 05:31:16 2024 00:25:12.474 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5001msec) 00:25:12.474 slat (nsec): min=7825, max=41375, avg=13265.12, stdev=4581.34 00:25:12.474 clat (usec): min=3079, max=5672, avg=4136.01, stdev=89.11 00:25:12.474 lat (usec): min=3093, max=5700, avg=4149.27, stdev=88.26 00:25:12.474 clat percentiles (usec): 00:25:12.474 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 00:25:12.474 | 30.00th=[ 4113], 40.00th=[ 4113], 
50.00th=[ 4146], 60.00th=[ 4146], 00:25:12.474 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4178], 95.00th=[ 4228], 00:25:12.474 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 5145], 99.95th=[ 5604], 00:25:12.475 | 99.99th=[ 5669] 00:25:12.475 bw ( KiB/s): min=15232, max=15360, per=24.98%, avg=15246.22, stdev=42.67, samples=9 00:25:12.475 iops : min= 1904, max= 1920, avg=1905.78, stdev= 5.33, samples=9 00:25:12.475 lat (msec) : 4=0.35%, 10=99.65% 00:25:12.475 cpu : usr=94.22%, sys=4.60%, ctx=76, majf=0, minf=9 00:25:12.475 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.475 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.475 issued rwts: total=9536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.475 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:12.475 filename1: (groupid=0, jobs=1): err= 0: pid=97553: Thu Jul 25 05:31:16 2024 00:25:12.475 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5001msec) 00:25:12.475 slat (nsec): min=7858, max=50512, avg=15335.98, stdev=3567.36 00:25:12.475 clat (usec): min=2917, max=6325, avg=4118.53, stdev=104.69 00:25:12.475 lat (usec): min=2929, max=6339, avg=4133.86, stdev=105.12 00:25:12.475 clat percentiles (usec): 00:25:12.475 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:25:12.475 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:25:12.475 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4178], 00:25:12.475 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 5276], 99.95th=[ 6259], 00:25:12.475 | 99.99th=[ 6325] 00:25:12.475 bw ( KiB/s): min=15232, max=15360, per=24.98%, avg=15246.22, stdev=42.67, samples=9 00:25:12.475 iops : min= 1904, max= 1920, avg=1905.78, stdev= 5.33, samples=9 00:25:12.475 lat (msec) : 4=0.42%, 10=99.58% 00:25:12.475 cpu : usr=94.58%, sys=4.28%, ctx=9, majf=0, minf=9 00:25:12.475 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.475 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.475 issued rwts: total=9536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.475 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:12.475 00:25:12.475 Run status group 0 (all jobs): 00:25:12.475 READ: bw=59.6MiB/s (62.5MB/s), 14.9MiB/s-14.9MiB/s (15.6MB/s-15.6MB/s), io=298MiB (313MB), run=5001-5003msec 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.475 00:25:12.475 real 0m23.301s 00:25:12.475 user 2m6.282s 00:25:12.475 sys 0m5.200s 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 ************************************ 00:25:12.475 END TEST fio_dif_rand_params 00:25:12.475 ************************************ 00:25:12.475 05:31:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:12.475 05:31:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:12.475 05:31:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 ************************************ 00:25:12.475 START TEST fio_dif_digest 00:25:12.475 ************************************ 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:12.475 05:31:16 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 bdev_null0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.475 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:12.475 [2024-07-25 05:31:16.650119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:12.734 { 00:25:12.734 "params": { 00:25:12.734 "name": "Nvme$subsystem", 00:25:12.734 "trtype": "$TEST_TRANSPORT", 00:25:12.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.734 "adrfam": "ipv4", 00:25:12.734 "trsvcid": "$NVMF_PORT", 00:25:12.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.734 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.734 "hdgst": ${hdgst:-false}, 00:25:12.734 "ddgst": ${ddgst:-false} 00:25:12.734 }, 00:25:12.734 "method": "bdev_nvme_attach_controller" 00:25:12.734 } 00:25:12.734 EOF 00:25:12.734 )") 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:12.734 "params": { 00:25:12.734 "name": "Nvme0", 00:25:12.734 "trtype": "tcp", 00:25:12.734 "traddr": "10.0.0.2", 00:25:12.734 "adrfam": "ipv4", 00:25:12.734 "trsvcid": "4420", 00:25:12.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:12.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:12.734 "hdgst": true, 00:25:12.734 "ddgst": true 00:25:12.734 }, 00:25:12.734 "method": "bdev_nvme_attach_controller" 00:25:12.734 }' 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:12.734 05:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.734 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:12.734 ... 
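Annotation: the only difference from the rand_params config printed earlier is "hdgst": true and "ddgst": true. That switches on the NVMe/TCP header and data digests, which are CRC32C checksums computed over every PDU, so each 128 KiB read in this job is integrity-checked on the wire. Given the ${hdgst:-false}/${ddgst:-false} defaults in the template, setting the two variables before expanding it is enough; an illustrative one-liner, since the suite may set them differently:

# Flip the template defaults seen at common.sh@554; bash's temporary
# environment prefix applies to the function call.
hdgst=true ddgst=true gen_nvmf_target_json 0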
00:25:12.734 fio-3.35 00:25:12.734 Starting 3 threads 00:25:24.959 00:25:24.959 filename0: (groupid=0, jobs=1): err= 0: pid=97659: Thu Jul 25 05:31:27 2024 00:25:24.959 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(262MiB/10005msec) 00:25:24.959 slat (nsec): min=8166, max=42379, avg=13089.56, stdev=3265.55 00:25:24.959 clat (usec): min=7862, max=19377, avg=14280.63, stdev=1176.60 00:25:24.959 lat (usec): min=7873, max=19391, avg=14293.72, stdev=1176.77 00:25:24.959 clat percentiles (usec): 00:25:24.959 | 1.00th=[11469], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:25:24.959 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:25:24.959 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:25:24.959 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19006], 99.95th=[19268], 00:25:24.959 | 99.99th=[19268] 00:25:24.959 bw ( KiB/s): min=25344, max=28047, per=34.80%, avg=26887.53, stdev=668.56, samples=19 00:25:24.959 iops : min= 198, max= 219, avg=210.05, stdev= 5.21, samples=19 00:25:24.959 lat (msec) : 10=0.52%, 20=99.48% 00:25:24.959 cpu : usr=92.97%, sys=5.67%, ctx=122, majf=0, minf=0 00:25:24.959 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:24.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.959 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.959 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:24.959 filename0: (groupid=0, jobs=1): err= 0: pid=97660: Thu Jul 25 05:31:27 2024 00:25:24.959 read: IOPS=162, BW=20.3MiB/s (21.3MB/s)(204MiB/10046msec) 00:25:24.959 slat (nsec): min=8241, max=46849, avg=13896.06, stdev=3152.59 00:25:24.959 clat (usec): min=11320, max=60710, avg=18397.26, stdev=1631.28 00:25:24.959 lat (usec): min=11333, max=60723, avg=18411.15, stdev=1631.45 00:25:24.959 clat percentiles (usec): 00:25:24.959 | 1.00th=[15795], 5.00th=[16909], 10.00th=[17171], 20.00th=[17695], 00:25:24.959 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18482], 00:25:24.959 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[19792], 00:25:24.959 | 99.00th=[21103], 99.50th=[22152], 99.90th=[46924], 99.95th=[60556], 00:25:24.959 | 99.99th=[60556] 00:25:24.959 bw ( KiB/s): min=19968, max=21547, per=27.04%, avg=20887.45, stdev=407.46, samples=20 00:25:24.959 iops : min= 156, max= 168, avg=163.15, stdev= 3.13, samples=20 00:25:24.959 lat (msec) : 20=96.57%, 50=3.37%, 100=0.06% 00:25:24.959 cpu : usr=92.99%, sys=5.75%, ctx=29, majf=0, minf=0 00:25:24.959 IO depths : 1=6.7%, 2=93.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:24.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.959 issued rwts: total=1634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.959 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:24.959 filename0: (groupid=0, jobs=1): err= 0: pid=97661: Thu Jul 25 05:31:27 2024 00:25:24.959 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(291MiB/10007msec) 00:25:24.959 slat (nsec): min=5087, max=37632, avg=12728.93, stdev=2216.89 00:25:24.959 clat (usec): min=7743, max=54491, avg=12866.32, stdev=1681.66 00:25:24.959 lat (usec): min=7755, max=54504, avg=12879.05, stdev=1681.75 00:25:24.959 clat percentiles (usec): 00:25:24.959 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:25:24.959 | 
30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:25:24.959 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:25:24.959 | 99.00th=[15533], 99.50th=[15795], 99.90th=[53216], 99.95th=[54264], 00:25:24.959 | 99.99th=[54264] 00:25:24.959 bw ( KiB/s): min=27648, max=30464, per=38.60%, avg=29818.16, stdev=714.84, samples=19 00:25:24.959 iops : min= 216, max= 238, avg=232.89, stdev= 5.59, samples=19 00:25:24.959 lat (msec) : 10=0.04%, 20=99.83%, 100=0.13% 00:25:24.959 cpu : usr=92.69%, sys=5.96%, ctx=9, majf=0, minf=9 00:25:24.959 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:24.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.959 issued rwts: total=2330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.959 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:24.959 00:25:24.959 Run status group 0 (all jobs): 00:25:24.959 READ: bw=75.4MiB/s (79.1MB/s), 20.3MiB/s-29.1MiB/s (21.3MB/s-30.5MB/s), io=758MiB (795MB), run=10005-10046msec 00:25:24.959 05:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:24.959 05:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:24.959 05:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:24.959 05:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:24.959 05:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:24.960 ************************************ 00:25:24.960 END TEST fio_dif_digest 00:25:24.960 ************************************ 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.960 00:25:24.960 real 0m10.909s 00:25:24.960 user 0m28.525s 00:25:24.960 sys 0m1.961s 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:24.960 05:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:24.960 05:31:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:24.960 05:31:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.960 rmmod nvme_tcp 00:25:24.960 rmmod nvme_fabrics 00:25:24.960 rmmod nvme_keyring 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 96907 ']' 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 96907 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 96907 ']' 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 96907 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96907 00:25:24.960 killing process with pid 96907 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96907' 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@969 -- # kill 96907 00:25:24.960 05:31:27 nvmf_dif -- common/autotest_common.sh@974 -- # wait 96907 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:25:24.960 05:31:27 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:24.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.960 Waiting for block devices as requested 00:25:24.960 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:24.960 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:24.960 05:31:28 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:24.960 05:31:28 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:24.960 05:31:28 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.960 05:31:28 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.960 05:31:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.960 05:31:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:24.960 05:31:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.960 05:31:28 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:24.960 ************************************ 00:25:24.960 END TEST nvmf_dif 00:25:24.960 ************************************ 00:25:24.960 00:25:24.960 real 0m59.264s 00:25:24.960 user 3m50.622s 00:25:24.960 sys 0m14.810s 00:25:24.960 05:31:28 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:24.960 05:31:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:24.960 05:31:28 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:24.960 05:31:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:24.960 05:31:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:24.960 05:31:28 -- common/autotest_common.sh@10 -- # set +x 00:25:24.960 ************************************ 00:25:24.960 START TEST nvmf_abort_qd_sizes 00:25:24.960 ************************************ 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:24.960 * Looking for test storage... 
00:25:24.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.960 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:24.961 05:31:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:24.961 Cannot find device "nvmf_tgt_br" 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:24.961 Cannot find device "nvmf_tgt_br2" 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:24.961 Cannot find device "nvmf_tgt_br" 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:24.961 Cannot find device "nvmf_tgt_br2" 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:24.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:24.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:24.961 05:31:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:24.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:25:24.961 00:25:24.961 --- 10.0.0.2 ping statistics --- 00:25:24.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.961 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:24.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:24.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:25:24.961 00:25:24.961 --- 10.0.0.3 ping statistics --- 00:25:24.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.961 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:24.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:24.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:24.961 00:25:24.961 --- 10.0.0.1 ping statistics --- 00:25:24.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.961 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:24.961 05:31:28 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:25.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:25.526 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:25.526 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98247 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98247 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98247 ']' 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.784 05:31:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:25.784 [2024-07-25 05:31:29.789937] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
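Annotation: the nvmf_veth_init block above (common.sh@166 through @207) builds the whole NVMe/TCP test network in software: a network namespace for the target, veth pairs bridged back to the host for the initiator, a firewall exception for port 4420, and pings to verify all three addresses. Condensed here to a single target interface, with commands lifted from the trace; the if2/br2 pair and the intermediate link-up steps are elided:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator side
ip netns exec nvmf_tgt_ns_spdk \
    ip addr add 10.0.0.2/24 dev nvmf_tgt_if                 # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# The target app then runs inside the namespace, as at common.sh@480:
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf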
00:25:25.784 [2024-07-25 05:31:29.790047] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.784 [2024-07-25 05:31:29.930372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.042 [2024-07-25 05:31:29.990067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.042 [2024-07-25 05:31:29.990285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.042 [2024-07-25 05:31:29.990421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.042 [2024-07-25 05:31:29.990552] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.042 [2024-07-25 05:31:29.990588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.042 [2024-07-25 05:31:29.990797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.042 [2024-07-25 05:31:29.990916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.042 [2024-07-25 05:31:29.991534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.042 [2024-07-25 05:31:29.991547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:25:26.042 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:26.043 05:31:30 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:26.043 05:31:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:26.043 ************************************ 00:25:26.043 START TEST spdk_target_abort 00:25:26.043 ************************************ 00:25:26.043 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:25:26.043 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:26.043 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:26.043 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.043 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 spdk_targetn1 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:26.301 [2024-07-25 05:31:30.232288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:26.301 [2024-07-25 05:31:30.260421] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.301 05:31:30 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:26.301 05:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:29.582 Initializing NVMe Controllers 00:25:29.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:29.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:29.582 Initialization complete. Launching workers. 
00:25:29.582 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11391, failed: 0 00:25:29.582 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1037, failed to submit 10354 00:25:29.582 success 758, unsuccess 279, failed 0 00:25:29.582 05:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:29.582 05:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:32.863 Initializing NVMe Controllers 00:25:32.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:32.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:32.863 Initialization complete. Launching workers. 00:25:32.863 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5861, failed: 0 00:25:32.863 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 4622 00:25:32.863 success 237, unsuccess 1002, failed 0 00:25:32.863 05:31:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:32.863 05:31:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:36.163 Initializing NVMe Controllers 00:25:36.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:36.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:36.163 Initialization complete. Launching workers. 
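rabort drives one abort run per queue depth in qds=(4 24 64); each per-run summary above counts I/Os completed against aborts submitted. The loop reduces to the following sketch, assuming the repo-relative path to the abort example:

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
    # -w rw -M 50: mixed read/write workload; -o 4096: 4 KiB I/Os; -q: queue depth
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

Deeper queues leave more commands in flight at any instant, which is why the abort-submitted counts in the summaries grow with -q.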
00:25:36.163 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30674, failed: 0 00:25:36.163 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2677, failed to submit 27997 00:25:36.163 success 450, unsuccess 2227, failed 0 00:25:36.163 05:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:36.163 05:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.163 05:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:36.163 05:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.163 05:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:36.163 05:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.163 05:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98247 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98247 ']' 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98247 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98247 00:25:37.097 killing process with pid 98247 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98247' 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98247 00:25:37.097 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98247 00:25:37.355 ************************************ 00:25:37.355 END TEST spdk_target_abort 00:25:37.355 ************************************ 00:25:37.355 00:25:37.355 real 0m11.158s 00:25:37.355 user 0m42.597s 00:25:37.355 sys 0m1.712s 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.355 05:31:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:37.355 05:31:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:37.355 05:31:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.355 05:31:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:37.355 ************************************ 00:25:37.355 START TEST kernel_target_abort 00:25:37.355 
************************************ 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:37.355 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:37.356 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:37.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:37.614 Waiting for block devices as requested 00:25:37.614 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:37.873 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:37.873 No valid GPT data, bailing 00:25:37.873 05:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:37.873 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:38.132 No valid GPT data, bailing 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
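The device scan above walks /sys/block/nvme*, skips zoned namespaces, and probes each remaining device with spdk-gpt.py and blkid; a device reporting "No valid GPT data, bailing" with an empty PTTYPE is considered free, and the last free one found (nvme1n1 in this run) ends up backing the kernel target. A condensed sketch of that selection logic, keeping only the blkid probe:

  for block in /sys/block/nvme*; do
    dev=${block##*/}
    # Skip zoned namespaces
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # No partition table means the device is not in use
    if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
      nvme=/dev/$dev    # last free device wins
    fi
  done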
00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:38.132 No valid GPT data, bailing 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:38.132 No valid GPT data, bailing 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 --hostid=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 -a 10.0.0.1 -t tcp -s 4420 00:25:38.132 00:25:38.132 Discovery Log Number of Records 2, Generation counter 2 00:25:38.132 =====Discovery Log Entry 0====== 00:25:38.132 trtype: tcp 00:25:38.132 adrfam: ipv4 00:25:38.132 subtype: current discovery subsystem 00:25:38.132 treq: not specified, sq flow control disable supported 00:25:38.132 portid: 1 00:25:38.132 trsvcid: 4420 00:25:38.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:38.132 traddr: 10.0.0.1 00:25:38.132 eflags: none 00:25:38.132 sectype: none 00:25:38.132 =====Discovery Log Entry 1====== 00:25:38.132 trtype: tcp 00:25:38.132 adrfam: ipv4 00:25:38.132 subtype: nvme subsystem 00:25:38.132 treq: not specified, sq flow control disable supported 00:25:38.132 portid: 1 00:25:38.132 trsvcid: 4420 00:25:38.132 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:38.132 traddr: 10.0.0.1 00:25:38.132 eflags: none 00:25:38.132 sectype: none 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:38.132 05:31:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:38.132 05:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:41.440 Initializing NVMe Controllers 00:25:41.441 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:41.441 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:41.441 Initialization complete. Launching workers. 00:25:41.441 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34765, failed: 0 00:25:41.441 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34765, failed to submit 0 00:25:41.441 success 0, unsuccess 34765, failed 0 00:25:41.441 05:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:41.441 05:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:44.724 Initializing NVMe Controllers 00:25:44.724 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:44.724 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:44.724 Initialization complete. Launching workers. 
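configure_kernel_target drives the in-kernel nvmet target through configfs. xtrace hides the redirection targets of the echo calls above, so the standard nvmet attribute names are assumed in this sketch (the serial-number write is omitted):

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"       # assumed attribute name
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"   # expose the subsystem on the port

The nvme discover output earlier confirms the result: both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are listed on 10.0.0.1 port 4420.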
00:25:44.724 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66603, failed: 0 00:25:44.724 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28911, failed to submit 37692 00:25:44.724 success 0, unsuccess 28911, failed 0 00:25:44.724 05:31:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:44.724 05:31:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:48.009 Initializing NVMe Controllers 00:25:48.009 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:48.009 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:48.009 Initialization complete. Launching workers. 00:25:48.009 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83540, failed: 0 00:25:48.009 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20892, failed to submit 62648 00:25:48.009 success 0, unsuccess 20892, failed 0 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:48.009 05:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:48.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:50.474 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:50.474 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:50.474 00:25:50.474 real 0m13.272s 00:25:50.474 user 0m6.335s 00:25:50.474 sys 0m4.379s 00:25:50.474 05:31:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:50.474 05:31:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:50.474 ************************************ 00:25:50.474 END TEST kernel_target_abort 00:25:50.474 ************************************ 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:50.733 
05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.733 rmmod nvme_tcp 00:25:50.733 rmmod nvme_fabrics 00:25:50.733 rmmod nvme_keyring 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98247 ']' 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98247 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98247 ']' 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98247 00:25:50.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98247) - No such process 00:25:50.733 Process with pid 98247 is not found 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98247 is not found' 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:25:50.733 05:31:54 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:50.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:50.991 Waiting for block devices as requested 00:25:50.991 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.249 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:51.249 00:25:51.249 real 0m26.866s 00:25:51.249 user 0m49.854s 00:25:51.249 sys 0m7.343s 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:51.249 ************************************ 00:25:51.249 05:31:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:51.249 END TEST nvmf_abort_qd_sizes 00:25:51.249 ************************************ 00:25:51.249 05:31:55 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:51.249 05:31:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:51.250 05:31:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:51.250 05:31:55 -- common/autotest_common.sh@10 -- # set +x 
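Before keyring_file begins, the abort suite tears its targets down in reverse order of setup; clean_kernel_target and nvmftestfini above reduce to this sketch (configfs paths as in the setup sketch; the redirection target of the echo 0 is assumed to be the namespace enable attribute):

  echo 0 > "$sub/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet            # unload the kernel target
  modprobe -v -r nvme-tcp nvme-fabrics   # initiator side, per the rmmod lines above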
00:25:51.250 ************************************ 00:25:51.250 START TEST keyring_file 00:25:51.250 ************************************ 00:25:51.250 05:31:55 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:51.509 * Looking for test storage... 00:25:51.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:51.509 05:31:55 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.509 05:31:55 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.509 05:31:55 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.509 05:31:55 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.509 05:31:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.509 05:31:55 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.509 05:31:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:51.509 05:31:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@47 -- # : 0 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DnLZtikUZM 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:51.509 05:31:55 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DnLZtikUZM 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DnLZtikUZM 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DnLZtikUZM 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t3wkXBuHhb 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:51.509 05:31:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t3wkXBuHhb 00:25:51.509 05:31:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t3wkXBuHhb 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.t3wkXBuHhb 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=99117 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.509 05:31:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99117 00:25:51.509 05:31:55 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99117 ']' 00:25:51.509 05:31:55 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.509 05:31:55 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.509 05:31:55 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.509 05:31:55 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.509 05:31:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:51.768 [2024-07-25 05:31:55.700418] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
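prep_key above converts a raw hex secret into an NVMe TLS interchange-format PSK, writes it to a mktemp file, and restricts the mode to 0600; a later test deliberately loosens the mode to 0660 to prove that keyring_file_add_key rejects it. A sketch of the file handling, treating the formatted key as opaque since the encoding is produced by an inline python helper:

  key0path=$(mktemp)          # /tmp/tmp.DnLZtikUZM in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"      # looser modes fail with 'Invalid permissions for key file'
  # later registered over the bperf socket:
  #   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"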
00:25:51.768 [2024-07-25 05:31:55.700515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99117 ] 00:25:51.768 [2024-07-25 05:31:55.841365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.768 [2024-07-25 05:31:55.911569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:52.736 05:31:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:52.736 [2024-07-25 05:31:56.717651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.736 null0 00:25:52.736 [2024-07-25 05:31:56.749637] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:52.736 [2024-07-25 05:31:56.749875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:52.736 [2024-07-25 05:31:56.757616] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.736 05:31:56 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:52.736 [2024-07-25 05:31:56.769626] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:52.736 2024/07/25 05:31:56 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:25:52.736 request: 00:25:52.736 { 00:25:52.736 "method": "nvmf_subsystem_add_listener", 00:25:52.736 "params": { 00:25:52.736 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:52.736 "secure_channel": false, 00:25:52.736 "listen_address": { 00:25:52.736 "trtype": "tcp", 00:25:52.736 "traddr": "127.0.0.1", 00:25:52.736 "trsvcid": "4420" 00:25:52.736 } 00:25:52.736 } 00:25:52.736 } 00:25:52.736 Got JSON-RPC error 
response 00:25:52.736 GoRPCClient: error on JSON-RPC call 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:52.736 05:31:56 keyring_file -- keyring/file.sh@46 -- # bperfpid=99152 00:25:52.736 05:31:56 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99152 /var/tmp/bperf.sock 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99152 ']' 00:25:52.736 05:31:56 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:52.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:52.736 05:31:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:52.736 [2024-07-25 05:31:56.831896] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 00:25:52.736 [2024-07-25 05:31:56.832027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99152 ] 00:25:52.994 [2024-07-25 05:31:56.970696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.994 [2024-07-25 05:31:57.041252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.928 05:31:57 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:53.928 05:31:57 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:53.928 05:31:57 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:25:53.928 05:31:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:25:54.186 05:31:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t3wkXBuHhb 00:25:54.186 05:31:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t3wkXBuHhb 00:25:54.445 05:31:58 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:25:54.445 05:31:58 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:25:54.445 05:31:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:54.445 05:31:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:54.445 05:31:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:54.703 05:31:58 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DnLZtikUZM == 
\/\t\m\p\/\t\m\p\.\D\n\L\Z\t\i\k\U\Z\M ]] 00:25:54.703 05:31:58 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:25:54.703 05:31:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:54.703 05:31:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:54.703 05:31:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:54.703 05:31:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:54.961 05:31:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.t3wkXBuHhb == \/\t\m\p\/\t\m\p\.\t\3\w\k\X\B\u\H\h\b ]] 00:25:54.961 05:31:59 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:25:54.961 05:31:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:54.961 05:31:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:54.961 05:31:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:54.961 05:31:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:54.961 05:31:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:55.219 05:31:59 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:25:55.219 05:31:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:25:55.219 05:31:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:55.219 05:31:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:55.219 05:31:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:55.219 05:31:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:55.219 05:31:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:55.477 05:31:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:55.477 05:31:59 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:55.477 05:31:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:55.735 [2024-07-25 05:31:59.904847] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:55.993 nvme0n1 00:25:55.993 05:31:59 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:25:55.993 05:31:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:55.993 05:31:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:55.993 05:31:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:55.993 05:31:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:55.993 05:31:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:56.251 05:32:00 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:25:56.251 05:32:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:25:56.251 05:32:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:56.251 05:32:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:56.251 05:32:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:25:56.251 05:32:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:56.251 05:32:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:56.509 05:32:00 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:25:56.509 05:32:00 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.768 Running I/O for 1 seconds... 00:25:57.758 00:25:57.758 Latency(us) 00:25:57.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.758 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:57.758 nvme0n1 : 1.01 11124.09 43.45 0.00 0.00 11461.11 3872.58 17039.36 00:25:57.758 =================================================================================================================== 00:25:57.758 Total : 11124.09 43.45 0.00 0.00 11461.11 3872.58 17039.36 00:25:57.758 0 00:25:57.758 05:32:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:57.758 05:32:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:58.016 05:32:02 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:25:58.016 05:32:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:58.016 05:32:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:58.016 05:32:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:58.016 05:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:58.016 05:32:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:58.275 05:32:02 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:25:58.275 05:32:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:25:58.275 05:32:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:58.275 05:32:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:58.275 05:32:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:58.275 05:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:58.275 05:32:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:58.534 05:32:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:58.534 05:32:02 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:58.534 05:32:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:58.534 05:32:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:58.534 05:32:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:58.534 05:32:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.534 05:32:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:58.534 05:32:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
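get_refcnt above is a keyring_get_keys RPC against the bperf socket filtered through jq: key0's refcount reads 2 while the attached nvme0 controller holds a reference on top of the one taken by keyring_file_add_key, and it drops back to 1 after bdev_nvme_detach_controller. A sketch of the helper:

  get_refcnt() {
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r ".[] | select(.name == \"$1\") | .refcnt"
  }
  (( $(get_refcnt key0) == 1 ))   # only the keyring's own reference remains after detach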
00:25:58.534 05:32:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:58.534 05:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:58.793 [2024-07-25 05:32:02.894366] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:58.793 [2024-07-25 05:32:02.894799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325f30 (107): Transport endpoint is not connected 00:25:58.793 [2024-07-25 05:32:02.895789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325f30 (9): Bad file descriptor 00:25:58.793 [2024-07-25 05:32:02.896785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:58.793 [2024-07-25 05:32:02.896808] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:58.793 [2024-07-25 05:32:02.896819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:58.793 2024/07/25 05:32:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:58.793 request: 00:25:58.793 { 00:25:58.793 "method": "bdev_nvme_attach_controller", 00:25:58.793 "params": { 00:25:58.793 "name": "nvme0", 00:25:58.793 "trtype": "tcp", 00:25:58.793 "traddr": "127.0.0.1", 00:25:58.793 "adrfam": "ipv4", 00:25:58.793 "trsvcid": "4420", 00:25:58.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.793 "prchk_reftag": false, 00:25:58.793 "prchk_guard": false, 00:25:58.793 "hdgst": false, 00:25:58.793 "ddgst": false, 00:25:58.793 "psk": "key1" 00:25:58.793 } 00:25:58.793 } 00:25:58.793 Got JSON-RPC error response 00:25:58.793 GoRPCClient: error on JSON-RPC call 00:25:58.793 05:32:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:58.793 05:32:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:58.793 05:32:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:58.793 05:32:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:58.793 05:32:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:25:58.793 05:32:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:58.793 05:32:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:58.793 05:32:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:58.793 05:32:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:58.793 05:32:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:59.051 05:32:03 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:25:59.051 
05:32:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:25:59.051 05:32:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:59.051 05:32:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:59.051 05:32:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:59.051 05:32:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:59.051 05:32:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:59.616 05:32:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:59.616 05:32:03 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:25:59.616 05:32:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:59.616 05:32:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:25:59.616 05:32:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:59.874 05:32:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:25:59.874 05:32:04 keyring_file -- keyring/file.sh@77 -- # jq length 00:25:59.874 05:32:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:00.132 05:32:04 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:26:00.132 05:32:04 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DnLZtikUZM 00:26:00.132 05:32:04 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:26:00.132 05:32:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:00.132 05:32:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:26:00.132 05:32:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:00.133 05:32:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.133 05:32:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:00.133 05:32:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.133 05:32:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:26:00.133 05:32:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:26:00.391 [2024-07-25 05:32:04.525228] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DnLZtikUZM': 0100660 00:26:00.391 [2024-07-25 05:32:04.525274] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:00.391 2024/07/25 05:32:04 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.DnLZtikUZM], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:26:00.391 request: 00:26:00.391 { 00:26:00.391 "method": "keyring_file_add_key", 00:26:00.391 "params": { 00:26:00.391 "name": "key0", 00:26:00.391 "path": "/tmp/tmp.DnLZtikUZM" 00:26:00.391 } 00:26:00.391 } 00:26:00.391 Got JSON-RPC error response 00:26:00.391 GoRPCClient: error on JSON-RPC call 00:26:00.391 05:32:04 keyring_file -- common/autotest_common.sh@653 -- # 
es=1 00:26:00.391 05:32:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:00.391 05:32:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:00.391 05:32:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:00.391 05:32:04 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DnLZtikUZM 00:26:00.391 05:32:04 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:26:00.391 05:32:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DnLZtikUZM 00:26:00.649 05:32:04 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DnLZtikUZM 00:26:00.649 05:32:04 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:26:00.649 05:32:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:00.649 05:32:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:00.649 05:32:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:00.649 05:32:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:00.649 05:32:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:01.215 05:32:05 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:26:01.215 05:32:05 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:01.215 05:32:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:01.215 05:32:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:01.215 05:32:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:01.215 05:32:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.216 05:32:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:01.216 05:32:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.216 05:32:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:01.216 05:32:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:01.216 [2024-07-25 05:32:05.385408] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DnLZtikUZM': No such file or directory 00:26:01.216 [2024-07-25 05:32:05.385450] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:01.216 [2024-07-25 05:32:05.385476] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:01.216 [2024-07-25 05:32:05.385484] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:01.216 [2024-07-25 05:32:05.385493] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:01.216 2024/07/25 
05:32:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:26:01.216 request: 00:26:01.216 { 00:26:01.216 "method": "bdev_nvme_attach_controller", 00:26:01.216 "params": { 00:26:01.216 "name": "nvme0", 00:26:01.216 "trtype": "tcp", 00:26:01.216 "traddr": "127.0.0.1", 00:26:01.216 "adrfam": "ipv4", 00:26:01.216 "trsvcid": "4420", 00:26:01.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:01.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:01.216 "prchk_reftag": false, 00:26:01.216 "prchk_guard": false, 00:26:01.216 "hdgst": false, 00:26:01.216 "ddgst": false, 00:26:01.216 "psk": "key0" 00:26:01.216 } 00:26:01.216 } 00:26:01.216 Got JSON-RPC error response 00:26:01.216 GoRPCClient: error on JSON-RPC call 00:26:01.474 05:32:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:01.474 05:32:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:01.474 05:32:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:01.474 05:32:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:01.474 05:32:05 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:26:01.474 05:32:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:01.474 05:32:05 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:01.474 05:32:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:01.474 05:32:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:01.474 05:32:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:01.474 05:32:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:01.734 05:32:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:01.734 05:32:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.szp2JgYiIQ 00:26:01.734 05:32:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:01.734 05:32:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:01.734 05:32:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.734 05:32:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:01.734 05:32:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:01.734 05:32:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:01.734 05:32:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:01.734 05:32:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.szp2JgYiIQ 00:26:01.734 05:32:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.szp2JgYiIQ 00:26:01.734 05:32:05 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.szp2JgYiIQ 00:26:01.734 05:32:05 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.szp2JgYiIQ 00:26:01.734 05:32:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.szp2JgYiIQ 00:26:01.993 05:32:05 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:01.993 05:32:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:02.252 nvme0n1 00:26:02.252 05:32:06 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:26:02.252 05:32:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:02.252 05:32:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:02.252 05:32:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:02.252 05:32:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:02.252 05:32:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:02.510 05:32:06 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:26:02.510 05:32:06 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:26:02.510 05:32:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:02.769 05:32:06 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:26:02.769 05:32:06 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:26:02.769 05:32:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:02.769 05:32:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:02.769 05:32:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:03.027 05:32:07 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:26:03.027 05:32:07 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:26:03.027 05:32:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:03.027 05:32:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:03.027 05:32:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:03.027 05:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:03.027 05:32:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:03.284 05:32:07 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:26:03.284 05:32:07 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:03.284 05:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:03.543 05:32:07 keyring_file -- keyring/file.sh@104 -- # jq length 00:26:03.543 05:32:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:26:03.543 05:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:03.801 05:32:07 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:26:03.801 05:32:07 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.szp2JgYiIQ 00:26:03.801 05:32:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.szp2JgYiIQ 
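The `format_interchange_psk` / `format_key NVMeTLSkey-1 ...` steps in the trace above wrap the raw key in the NVMe TLS PSK interchange format via an inline `python -`. A stand-alone sketch of that computation, assuming zlib's CRC32 appended little-endian and the key taken as its ASCII bytes (consistent with the `NVMeTLSkey-1:00:...:` strings printed later in this log):

    # sketch only; the real helper lives in nvmf/common.sh and uses a `python -` heredoc
    format_interchange_psk() {
        local key=$1 digest=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"
    }
    # format_interchange_psk 00112233445566778899aabbccddeeff 0
    #   -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The arrow example matches the key string this very run later feeds to keyctl, which is what pins down the key-bytes-plus-CRC layout.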
00:26:04.060 05:32:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t3wkXBuHhb 00:26:04.060 05:32:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t3wkXBuHhb 00:26:04.318 05:32:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:04.318 05:32:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:04.885 nvme0n1 00:26:04.885 05:32:08 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:26:04.885 05:32:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:05.144 05:32:09 keyring_file -- keyring/file.sh@112 -- # config='{ 00:26:05.144 "subsystems": [ 00:26:05.144 { 00:26:05.144 "subsystem": "keyring", 00:26:05.144 "config": [ 00:26:05.144 { 00:26:05.144 "method": "keyring_file_add_key", 00:26:05.144 "params": { 00:26:05.144 "name": "key0", 00:26:05.144 "path": "/tmp/tmp.szp2JgYiIQ" 00:26:05.144 } 00:26:05.144 }, 00:26:05.144 { 00:26:05.144 "method": "keyring_file_add_key", 00:26:05.145 "params": { 00:26:05.145 "name": "key1", 00:26:05.145 "path": "/tmp/tmp.t3wkXBuHhb" 00:26:05.145 } 00:26:05.145 } 00:26:05.145 ] 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "subsystem": "iobuf", 00:26:05.145 "config": [ 00:26:05.145 { 00:26:05.145 "method": "iobuf_set_options", 00:26:05.145 "params": { 00:26:05.145 "large_bufsize": 135168, 00:26:05.145 "large_pool_count": 1024, 00:26:05.145 "small_bufsize": 8192, 00:26:05.145 "small_pool_count": 8192 00:26:05.145 } 00:26:05.145 } 00:26:05.145 ] 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "subsystem": "sock", 00:26:05.145 "config": [ 00:26:05.145 { 00:26:05.145 "method": "sock_set_default_impl", 00:26:05.145 "params": { 00:26:05.145 "impl_name": "posix" 00:26:05.145 } 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "method": "sock_impl_set_options", 00:26:05.145 "params": { 00:26:05.145 "enable_ktls": false, 00:26:05.145 "enable_placement_id": 0, 00:26:05.145 "enable_quickack": false, 00:26:05.145 "enable_recv_pipe": true, 00:26:05.145 "enable_zerocopy_send_client": false, 00:26:05.145 "enable_zerocopy_send_server": true, 00:26:05.145 "impl_name": "ssl", 00:26:05.145 "recv_buf_size": 4096, 00:26:05.145 "send_buf_size": 4096, 00:26:05.145 "tls_version": 0, 00:26:05.145 "zerocopy_threshold": 0 00:26:05.145 } 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "method": "sock_impl_set_options", 00:26:05.145 "params": { 00:26:05.145 "enable_ktls": false, 00:26:05.145 "enable_placement_id": 0, 00:26:05.145 "enable_quickack": false, 00:26:05.145 "enable_recv_pipe": true, 00:26:05.145 "enable_zerocopy_send_client": false, 00:26:05.145 "enable_zerocopy_send_server": true, 00:26:05.145 "impl_name": "posix", 00:26:05.145 "recv_buf_size": 2097152, 00:26:05.145 "send_buf_size": 2097152, 00:26:05.145 "tls_version": 0, 00:26:05.145 "zerocopy_threshold": 0 00:26:05.145 } 00:26:05.145 } 00:26:05.145 ] 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "subsystem": "vmd", 00:26:05.145 "config": [] 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "subsystem": "accel", 00:26:05.145 "config": [ 00:26:05.145 { 00:26:05.145 "method": 
"accel_set_options", 00:26:05.145 "params": { 00:26:05.145 "buf_count": 2048, 00:26:05.145 "large_cache_size": 16, 00:26:05.145 "sequence_count": 2048, 00:26:05.145 "small_cache_size": 128, 00:26:05.145 "task_count": 2048 00:26:05.145 } 00:26:05.145 } 00:26:05.145 ] 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "subsystem": "bdev", 00:26:05.145 "config": [ 00:26:05.145 { 00:26:05.145 "method": "bdev_set_options", 00:26:05.145 "params": { 00:26:05.145 "bdev_auto_examine": true, 00:26:05.145 "bdev_io_cache_size": 256, 00:26:05.145 "bdev_io_pool_size": 65535, 00:26:05.145 "iobuf_large_cache_size": 16, 00:26:05.145 "iobuf_small_cache_size": 128 00:26:05.145 } 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "method": "bdev_raid_set_options", 00:26:05.145 "params": { 00:26:05.145 "process_max_bandwidth_mb_sec": 0, 00:26:05.145 "process_window_size_kb": 1024 00:26:05.145 } 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "method": "bdev_iscsi_set_options", 00:26:05.145 "params": { 00:26:05.145 "timeout_sec": 30 00:26:05.145 } 00:26:05.145 }, 00:26:05.145 { 00:26:05.145 "method": "bdev_nvme_set_options", 00:26:05.145 "params": { 00:26:05.145 "action_on_timeout": "none", 00:26:05.145 "allow_accel_sequence": false, 00:26:05.145 "arbitration_burst": 0, 00:26:05.145 "bdev_retry_count": 3, 00:26:05.145 "ctrlr_loss_timeout_sec": 0, 00:26:05.145 "delay_cmd_submit": true, 00:26:05.145 "dhchap_dhgroups": [ 00:26:05.145 "null", 00:26:05.145 "ffdhe2048", 00:26:05.145 "ffdhe3072", 00:26:05.145 "ffdhe4096", 00:26:05.145 "ffdhe6144", 00:26:05.145 "ffdhe8192" 00:26:05.145 ], 00:26:05.145 "dhchap_digests": [ 00:26:05.145 "sha256", 00:26:05.145 "sha384", 00:26:05.145 "sha512" 00:26:05.145 ], 00:26:05.145 "disable_auto_failback": false, 00:26:05.145 "fast_io_fail_timeout_sec": 0, 00:26:05.145 "generate_uuids": false, 00:26:05.145 "high_priority_weight": 0, 00:26:05.145 "io_path_stat": false, 00:26:05.145 "io_queue_requests": 512, 00:26:05.145 "keep_alive_timeout_ms": 10000, 00:26:05.145 "low_priority_weight": 0, 00:26:05.145 "medium_priority_weight": 0, 00:26:05.145 "nvme_adminq_poll_period_us": 10000, 00:26:05.145 "nvme_error_stat": false, 00:26:05.145 "nvme_ioq_poll_period_us": 0, 00:26:05.145 "rdma_cm_event_timeout_ms": 0, 00:26:05.145 "rdma_max_cq_size": 0, 00:26:05.146 "rdma_srq_size": 0, 00:26:05.146 "reconnect_delay_sec": 0, 00:26:05.146 "timeout_admin_us": 0, 00:26:05.146 "timeout_us": 0, 00:26:05.146 "transport_ack_timeout": 0, 00:26:05.146 "transport_retry_count": 4, 00:26:05.146 "transport_tos": 0 00:26:05.146 } 00:26:05.146 }, 00:26:05.146 { 00:26:05.146 "method": "bdev_nvme_attach_controller", 00:26:05.146 "params": { 00:26:05.146 "adrfam": "IPv4", 00:26:05.146 "ctrlr_loss_timeout_sec": 0, 00:26:05.146 "ddgst": false, 00:26:05.146 "fast_io_fail_timeout_sec": 0, 00:26:05.146 "hdgst": false, 00:26:05.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:05.146 "name": "nvme0", 00:26:05.146 "prchk_guard": false, 00:26:05.146 "prchk_reftag": false, 00:26:05.146 "psk": "key0", 00:26:05.146 "reconnect_delay_sec": 0, 00:26:05.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.146 "traddr": "127.0.0.1", 00:26:05.146 "trsvcid": "4420", 00:26:05.146 "trtype": "TCP" 00:26:05.146 } 00:26:05.146 }, 00:26:05.146 { 00:26:05.146 "method": "bdev_nvme_set_hotplug", 00:26:05.146 "params": { 00:26:05.146 "enable": false, 00:26:05.146 "period_us": 100000 00:26:05.146 } 00:26:05.146 }, 00:26:05.146 { 00:26:05.146 "method": "bdev_wait_for_examine" 00:26:05.146 } 00:26:05.146 ] 00:26:05.146 }, 00:26:05.146 { 00:26:05.146 "subsystem": 
"nbd", 00:26:05.146 "config": [] 00:26:05.146 } 00:26:05.146 ] 00:26:05.146 }' 00:26:05.146 05:32:09 keyring_file -- keyring/file.sh@114 -- # killprocess 99152 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99152 ']' 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99152 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99152 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:05.146 killing process with pid 99152 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99152' 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@969 -- # kill 99152 00:26:05.146 Received shutdown signal, test time was about 1.000000 seconds 00:26:05.146 00:26:05.146 Latency(us) 00:26:05.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.146 =================================================================================================================== 00:26:05.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.146 05:32:09 keyring_file -- common/autotest_common.sh@974 -- # wait 99152 00:26:05.406 05:32:09 keyring_file -- keyring/file.sh@117 -- # bperfpid=99630 00:26:05.406 05:32:09 keyring_file -- keyring/file.sh@119 -- # waitforlisten 99630 /var/tmp/bperf.sock 00:26:05.406 05:32:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99630 ']' 00:26:05.406 05:32:09 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:05.406 05:32:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:05.406 05:32:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:05.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.406 05:32:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:05.406 05:32:09 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:26:05.406 "subsystems": [ 00:26:05.406 { 00:26:05.406 "subsystem": "keyring", 00:26:05.406 "config": [ 00:26:05.406 { 00:26:05.406 "method": "keyring_file_add_key", 00:26:05.406 "params": { 00:26:05.406 "name": "key0", 00:26:05.406 "path": "/tmp/tmp.szp2JgYiIQ" 00:26:05.406 } 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "method": "keyring_file_add_key", 00:26:05.406 "params": { 00:26:05.406 "name": "key1", 00:26:05.406 "path": "/tmp/tmp.t3wkXBuHhb" 00:26:05.406 } 00:26:05.406 } 00:26:05.406 ] 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "subsystem": "iobuf", 00:26:05.406 "config": [ 00:26:05.406 { 00:26:05.406 "method": "iobuf_set_options", 00:26:05.406 "params": { 00:26:05.406 "large_bufsize": 135168, 00:26:05.406 "large_pool_count": 1024, 00:26:05.406 "small_bufsize": 8192, 00:26:05.406 "small_pool_count": 8192 00:26:05.406 } 00:26:05.406 } 00:26:05.406 ] 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "subsystem": "sock", 00:26:05.406 "config": [ 00:26:05.406 { 00:26:05.406 "method": "sock_set_default_impl", 00:26:05.406 "params": { 00:26:05.406 "impl_name": "posix" 00:26:05.406 } 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "method": "sock_impl_set_options", 00:26:05.406 "params": { 00:26:05.406 "enable_ktls": false, 00:26:05.406 "enable_placement_id": 0, 00:26:05.406 "enable_quickack": false, 00:26:05.406 "enable_recv_pipe": true, 00:26:05.406 "enable_zerocopy_send_client": false, 00:26:05.406 "enable_zerocopy_send_server": true, 00:26:05.406 "impl_name": "ssl", 00:26:05.406 "recv_buf_size": 4096, 00:26:05.406 "send_buf_size": 4096, 00:26:05.406 "tls_version": 0, 00:26:05.406 "zerocopy_threshold": 0 00:26:05.406 } 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "method": "sock_impl_set_options", 00:26:05.406 "params": { 00:26:05.406 "enable_ktls": false, 00:26:05.406 "enable_placement_id": 0, 00:26:05.406 "enable_quickack": false, 00:26:05.406 "enable_recv_pipe": true, 00:26:05.406 "enable_zerocopy_send_client": false, 00:26:05.406 "enable_zerocopy_send_server": true, 00:26:05.406 "impl_name": "posix", 00:26:05.406 "recv_buf_size": 2097152, 00:26:05.406 "send_buf_size": 2097152, 00:26:05.406 "tls_version": 0, 00:26:05.406 "zerocopy_threshold": 0 00:26:05.406 } 00:26:05.406 } 00:26:05.406 ] 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "subsystem": "vmd", 00:26:05.406 "config": [] 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "subsystem": "accel", 00:26:05.406 "config": [ 00:26:05.406 { 00:26:05.406 "method": "accel_set_options", 00:26:05.406 "params": { 00:26:05.406 "buf_count": 2048, 00:26:05.406 "large_cache_size": 16, 00:26:05.406 "sequence_count": 2048, 00:26:05.406 "small_cache_size": 128, 00:26:05.406 "task_count": 2048 00:26:05.406 } 00:26:05.406 } 00:26:05.406 ] 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "subsystem": "bdev", 00:26:05.406 "config": [ 00:26:05.406 { 00:26:05.406 "method": "bdev_set_options", 00:26:05.406 "params": { 00:26:05.406 "bdev_auto_examine": true, 00:26:05.406 "bdev_io_cache_size": 256, 00:26:05.406 "bdev_io_pool_size": 65535, 00:26:05.406 "iobuf_large_cache_size": 16, 00:26:05.406 "iobuf_small_cache_size": 128 00:26:05.406 } 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "method": "bdev_raid_set_options", 00:26:05.406 "params": { 00:26:05.406 "process_max_bandwidth_mb_sec": 0, 00:26:05.406 "process_window_size_kb": 1024 00:26:05.406 } 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "method": "bdev_iscsi_set_options", 00:26:05.406 "params": { 00:26:05.406 "timeout_sec": 30 00:26:05.406 } 00:26:05.406 
}, 00:26:05.406 { 00:26:05.406 "method": "bdev_nvme_set_options", 00:26:05.406 "params": { 00:26:05.406 "action_on_timeout": "none", 00:26:05.406 "allow_accel_sequence": false, 00:26:05.406 "arbitration_burst": 0, 00:26:05.406 "bdev_retry_count": 3, 00:26:05.406 "ctrlr_loss_timeout_sec": 0, 00:26:05.406 "delay_cmd_submit": true, 00:26:05.406 "dhchap_dhgroups": [ 00:26:05.406 "null", 00:26:05.406 "ffdhe2048", 00:26:05.406 "ffdhe3072", 00:26:05.406 "ffdhe4096", 00:26:05.406 "ffdhe6144", 00:26:05.406 "ffdhe8192" 00:26:05.406 ], 00:26:05.406 "dhchap_digests": [ 00:26:05.406 "sha256", 00:26:05.406 "sha384", 00:26:05.406 "sha512" 00:26:05.406 ], 00:26:05.406 "disable_auto_failback": false, 00:26:05.406 "fast_io_fail_timeout_sec": 0, 00:26:05.406 "generate_uuids": false, 00:26:05.406 "high_priority_weight": 0, 00:26:05.406 "io_path_stat": false, 00:26:05.406 "io_queue_requests": 512, 00:26:05.406 "keep_alive_timeout_ms": 10000, 00:26:05.406 "low_priority_weight": 0, 00:26:05.406 "medium_priority_weight": 0, 00:26:05.406 "nvme_adminq_poll_period_us": 10000, 00:26:05.406 "nvme_error_stat": false, 00:26:05.406 "nvme_ioq_poll_period_us": 0, 00:26:05.406 "rdma_cm_event_timeout_ms": 0, 00:26:05.406 "rdma_max_cq_size": 0, 00:26:05.406 "rdma_srq_size": 0, 00:26:05.406 "reconnect_delay_sec": 0, 00:26:05.406 "timeout_admin_us": 0, 00:26:05.406 "timeout_us": 0, 00:26:05.406 "transport_ack_timeout": 0, 00:26:05.406 "transport_retry_count": 4, 00:26:05.406 "transport_tos": 0 00:26:05.406 } 00:26:05.406 }, 00:26:05.406 { 00:26:05.406 "method": "bdev_nvme_attach_controller", 00:26:05.406 "params": { 00:26:05.406 "adrfam": "IPv4", 00:26:05.406 "ctrlr_loss_timeout_sec": 0, 00:26:05.406 "ddgst": false, 00:26:05.406 "fast_io_fail_timeout_sec": 0, 00:26:05.407 "hdgst": false, 00:26:05.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:05.407 "name": "nvme0", 00:26:05.407 "prchk_guard": false, 00:26:05.407 "prchk_reftag": false, 00:26:05.407 "psk": "key0", 00:26:05.407 "reconnect_delay_sec": 0, 00:26:05.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.407 "traddr": "127.0.0.1", 00:26:05.407 "trsvcid": "4420", 00:26:05.407 "trtype": "TCP" 00:26:05.407 } 00:26:05.407 }, 00:26:05.407 { 00:26:05.407 "method": "bdev_nvme_set_hotplug", 00:26:05.407 "params": { 00:26:05.407 "enable": false, 00:26:05.407 "period_us": 100000 00:26:05.407 } 00:26:05.407 }, 00:26:05.407 { 00:26:05.407 "method": "bdev_wait_for_examine" 00:26:05.407 } 00:26:05.407 ] 00:26:05.407 }, 00:26:05.407 { 00:26:05.407 "subsystem": "nbd", 00:26:05.407 "config": [] 00:26:05.407 } 00:26:05.407 ] 00:26:05.407 }' 00:26:05.407 05:32:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:05.407 05:32:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:05.407 [2024-07-25 05:32:09.407886] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
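Every `bperf_cmd` call in this trace expands to the same one-liner visible at keyring/common.sh@8: the stock rpc.py client pointed at the bdevperf RPC socket instead of the default /var/tmp/spdk.sock. Sketched as it appears here:

    bperfsock=/var/tmp/bperf.sock
    bperf_cmd() {
        # forward any RPC (keyring_get_keys, bdev_nvme_attach_controller, ...) to bdevperf
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" "$@"
    }
    # e.g. the refcount checks that follow:
    bperf_cmd keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt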
00:26:05.407 [2024-07-25 05:32:09.408541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99630 ] 00:26:05.407 [2024-07-25 05:32:09.546401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.665 [2024-07-25 05:32:09.605208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.665 [2024-07-25 05:32:09.746029] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:06.231 05:32:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.231 05:32:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:06.231 05:32:10 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:26:06.231 05:32:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:06.231 05:32:10 keyring_file -- keyring/file.sh@120 -- # jq length 00:26:06.799 05:32:10 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:26:06.799 05:32:10 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:06.799 05:32:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:06.799 05:32:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:06.799 05:32:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:07.366 05:32:11 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:26:07.366 05:32:11 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:26:07.366 05:32:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:07.366 05:32:11 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:26:07.366 05:32:11 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:26:07.366 05:32:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:07.366 05:32:11 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.szp2JgYiIQ /tmp/tmp.t3wkXBuHhb 00:26:07.366 05:32:11 keyring_file -- keyring/file.sh@20 -- # killprocess 99630 00:26:07.366 05:32:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99630 ']' 00:26:07.366 05:32:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99630 00:26:07.366 05:32:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:26:07.366 05:32:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.366 
05:32:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99630 00:26:07.625 killing process with pid 99630 00:26:07.625 Received shutdown signal, test time was about 1.000000 seconds 00:26:07.625 00:26:07.625 Latency(us) 00:26:07.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.625 =================================================================================================================== 00:26:07.625 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99630' 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@969 -- # kill 99630 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@974 -- # wait 99630 00:26:07.625 05:32:11 keyring_file -- keyring/file.sh@21 -- # killprocess 99117 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99117 ']' 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99117 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99117 00:26:07.625 killing process with pid 99117 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99117' 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@969 -- # kill 99117 00:26:07.625 [2024-07-25 05:32:11.740299] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:07.625 05:32:11 keyring_file -- common/autotest_common.sh@974 -- # wait 99117 00:26:07.883 00:26:07.883 real 0m16.617s 00:26:07.883 user 0m42.092s 00:26:07.883 sys 0m3.141s 00:26:07.883 05:32:11 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.883 05:32:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:07.883 ************************************ 00:26:07.883 END TEST keyring_file 00:26:07.883 ************************************ 00:26:07.883 05:32:12 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:26:07.883 05:32:12 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:07.883 05:32:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:07.883 05:32:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:07.883 05:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:07.883 ************************************ 00:26:07.883 START TEST keyring_linux 00:26:07.883 ************************************ 00:26:07.883 05:32:12 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:08.142 * Looking for test storage... 
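The shutdown sequences above (`kill -0`, `uname`, `ps --no-headers -o comm=`, `kill`, `wait`) are all the common `killprocess` helper from autotest_common.sh. A simplified sketch under the names visible in this trace; the real helper also special-cases processes launched via sudo, which is what the `'[' reactor_1 = sudo ']'` checks are probing:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 1           # bail out if already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap; surfaces the exit code
    }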
00:26:08.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:08.142 05:32:12 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:08.142 05:32:12 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:08.142 05:32:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:26:08.142 05:32:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.142 05:32:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.142 05:32:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.142 05:32:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.142 05:32:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.142 05:32:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=4a1ea60a-4fd7-4dce-a986-fc2ce9988d78 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:08.143 05:32:12 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.143 05:32:12 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.143 05:32:12 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.143 05:32:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.143 05:32:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.143 05:32:12 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.143 05:32:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:26:08.143 05:32:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:26:08.143 05:32:12 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:26:08.143 /tmp/:spdk-test:key0 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:26:08.143 05:32:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:26:08.143 /tmp/:spdk-test:key1 00:26:08.143 05:32:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=99779 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.143 05:32:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 99779 00:26:08.143 05:32:12 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99779 ']' 00:26:08.143 05:32:12 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.143 05:32:12 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.143 05:32:12 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.143 05:32:12 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.143 05:32:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:08.402 [2024-07-25 05:32:12.321637] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
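The `prep_key` expansion above writes the interchange-format PSK to a file and locks its mode to 0600 — the permission that the earlier keyring_file failure (0660 rejected as "Invalid permissions ... 0100660") shows to be mandatory. A sketch consistent with the keyring/common.sh@15-23 lines in this trace; the optional fourth argument is an assumption inferred from the two call sites (keyring_file fell back to mktemp, linux.sh passes /tmp/:spdk-test:keyN):

    prep_key() {
        local name=$1 key=$2 digest=$3 path=${4:-$(mktemp)}
        format_interchange_psk "$key" "$digest" > "$path"
        chmod 0600 "$path"     # anything wider is refused by keyring_file_check_path
        echo "$path"
    }
    prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0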
00:26:08.402 [2024-07-25 05:32:12.321722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99779 ] 00:26:08.402 [2024-07-25 05:32:12.457698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.402 [2024-07-25 05:32:12.536024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:26:08.661 05:32:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:08.661 [2024-07-25 05:32:12.726204] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.661 null0 00:26:08.661 [2024-07-25 05:32:12.758230] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:08.661 [2024-07-25 05:32:12.758469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.661 05:32:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:26:08.661 427590889 00:26:08.661 05:32:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:26:08.661 560481014 00:26:08.661 05:32:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=99800 00:26:08.661 05:32:12 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:26:08.661 05:32:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 99800 /var/tmp/bperf.sock 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99800 ']' 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.661 05:32:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:08.920 [2024-07-25 05:32:12.842768] Starting SPDK v24.09-pre git sha1 6a0934c18 / DPDK 24.03.0 initialization... 
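The two `keyctl add user ... @s` calls above stage the PSKs in the kernel session keyring rather than in files; the serials they print (427590889 and 560481014) are kernel key IDs, and the later `--psk :spdk-test:key0` attach resolves the key by that description. The lookup/teardown pattern this test exercises, sketched (`$psk` standing in for the NVMeTLSkey-1:00:...: payload shown above):

    sn=$(keyctl add user ":spdk-test:key0" "$psk" @s)   # -> e.g. 427590889 in this run
    keyctl search @s user ":spdk-test:key0"            # linux.sh@16 get_keysn: description -> serial
    keyctl print "$sn"                                 # dumps the NVMeTLSkey-1:00:...: payload
    keyctl unlink "$sn"                                # cleanup; the log reports '1 links removed'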
00:26:08.920 [2024-07-25 05:32:12.842865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99800 ] 00:26:08.920 [2024-07-25 05:32:12.982089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.920 [2024-07-25 05:32:13.046509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.920 05:32:13 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.920 05:32:13 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:26:08.920 05:32:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:26:08.920 05:32:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:26:09.225 05:32:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:26:09.225 05:32:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:09.791 05:32:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:09.791 05:32:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:09.791 [2024-07-25 05:32:13.948636] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:10.050 nvme0n1 00:26:10.050 05:32:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:26:10.050 05:32:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:26:10.050 05:32:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:10.050 05:32:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:10.050 05:32:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:10.050 05:32:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:10.308 05:32:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:26:10.308 05:32:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:10.308 05:32:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:26:10.308 05:32:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:26:10.308 05:32:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:10.308 05:32:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:10.308 05:32:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:26:10.567 05:32:14 keyring_linux -- keyring/linux.sh@25 -- # sn=427590889 00:26:10.567 05:32:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:26:10.567 05:32:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:10.567 05:32:14 keyring_linux -- keyring/linux.sh@26 -- # [[ 427590889 == \4\2\7\5\9\0\8\8\9 ]] 00:26:10.567 05:32:14 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 427590889 00:26:10.567 05:32:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:26:10.567 05:32:14 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.826 Running I/O for 1 seconds... 00:26:11.764 00:26:11.764 Latency(us) 00:26:11.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.764 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:11.764 nvme0n1 : 1.01 11645.57 45.49 0.00 0.00 10919.46 6881.28 17396.83 00:26:11.764 =================================================================================================================== 00:26:11.764 Total : 11645.57 45.49 0.00 0.00 10919.46 6881.28 17396.83 00:26:11.764 0 00:26:11.764 05:32:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:11.764 05:32:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:12.023 05:32:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:26:12.023 05:32:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:26:12.023 05:32:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:12.023 05:32:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:12.023 05:32:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:12.023 05:32:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:12.282 05:32:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:26:12.282 05:32:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:12.282 05:32:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:26:12.283 05:32:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:12.283 05:32:16 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:26:12.283 05:32:16 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:12.283 05:32:16 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:12.283 05:32:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:12.283 05:32:16 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:12.283 05:32:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:12.283 05:32:16 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:12.283 05:32:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:26:12.542 [2024-07-25 05:32:16.663994] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:12.542 [2024-07-25 05:32:16.664556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204aea0 (107): Transport endpoint is not connected 00:26:12.542 [2024-07-25 05:32:16.665544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204aea0 (9): Bad file descriptor 00:26:12.542 [2024-07-25 05:32:16.666541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:12.542 [2024-07-25 05:32:16.666565] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:12.542 [2024-07-25 05:32:16.666575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:12.542 2024/07/25 05:32:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:12.542 request: 00:26:12.542 { 00:26:12.542 "method": "bdev_nvme_attach_controller", 00:26:12.542 "params": { 00:26:12.542 "name": "nvme0", 00:26:12.542 "trtype": "tcp", 00:26:12.542 "traddr": "127.0.0.1", 00:26:12.542 "adrfam": "ipv4", 00:26:12.542 "trsvcid": "4420", 00:26:12.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:12.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:12.542 "prchk_reftag": false, 00:26:12.542 "prchk_guard": false, 00:26:12.542 "hdgst": false, 00:26:12.542 "ddgst": false, 00:26:12.542 "psk": ":spdk-test:key1" 00:26:12.542 } 00:26:12.542 } 00:26:12.542 Got JSON-RPC error response 00:26:12.542 GoRPCClient: error on JSON-RPC call 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@33 -- # sn=427590889 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 427590889 00:26:12.542 1 links removed 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@33 -- # sn=560481014 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 560481014 00:26:12.542 1 links removed 00:26:12.542 05:32:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 99800 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99800 ']' 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99800 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.542 05:32:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99800 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:12.801 killing process with pid 99800 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99800' 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 99800 00:26:12.801 Received shutdown signal, test time was about 1.000000 seconds 00:26:12.801 00:26:12.801 Latency(us) 00:26:12.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.801 =================================================================================================================== 00:26:12.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 99800 00:26:12.801 05:32:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 99779 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99779 ']' 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99779 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99779 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:12.801 killing process with pid 99779 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99779' 00:26:12.801 05:32:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 99779 00:26:12.802 05:32:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 99779 00:26:13.060 00:26:13.060 real 0m5.141s 00:26:13.060 user 0m10.586s 00:26:13.060 sys 0m1.418s 00:26:13.060 05:32:17 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:13.060 ************************************ 00:26:13.060 END TEST keyring_linux 00:26:13.060 ************************************ 00:26:13.060 05:32:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:13.060 05:32:17 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:26:13.060 05:32:17 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:26:13.060 05:32:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:13.060 05:32:17 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:13.060 05:32:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:26:13.060 05:32:17 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:26:13.060 
00:26:13.060 05:32:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:26:13.060 05:32:17 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:26:13.060 05:32:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:26:13.060 05:32:17 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:26:13.060 05:32:17 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:26:13.060 05:32:17 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:26:13.060 05:32:17 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:26:13.060 05:32:17 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:26:13.060 05:32:17 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:26:13.060 05:32:17 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:26:13.060 05:32:17 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:26:13.060 05:32:17 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:13.060 05:32:17 -- common/autotest_common.sh@10 -- # set +x
00:26:13.060 05:32:17 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:26:13.060 05:32:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:26:13.060 05:32:17 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:26:13.060 05:32:17 -- common/autotest_common.sh@10 -- # set +x
00:26:14.962 INFO: APP EXITING
00:26:14.963 INFO: killing all VMs
00:26:14.963 INFO: killing vhost app
00:26:14.963 INFO: EXIT DONE
00:26:15.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:15.531 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:26:15.531 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:26:16.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:16.358 Cleaning
00:26:16.358 Removing: /var/run/dpdk/spdk0/config
00:26:16.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:26:16.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:26:16.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:26:16.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:26:16.358 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:26:16.358 Removing: /var/run/dpdk/spdk0/hugepage_info
00:26:16.358 Removing: /var/run/dpdk/spdk1/config
00:26:16.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:26:16.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:26:16.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:26:16.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:26:16.358 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:26:16.358 Removing: /var/run/dpdk/spdk1/hugepage_info
00:26:16.358 Removing: /var/run/dpdk/spdk2/config
00:26:16.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:26:16.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:26:16.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:26:16.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:26:16.358 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:26:16.358 Removing: /var/run/dpdk/spdk2/hugepage_info
00:26:16.358 Removing: /var/run/dpdk/spdk3/config
00:26:16.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:26:16.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:26:16.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:26:16.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:26:16.358 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:26:16.358 Removing: /var/run/dpdk/spdk3/hugepage_info
00:26:16.358 Removing: /var/run/dpdk/spdk4/config
00:26:16.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:26:16.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:26:16.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:26:16.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:26:16.358 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:26:16.358 Removing: /var/run/dpdk/spdk4/hugepage_info
00:26:16.358 Removing: /dev/shm/nvmf_trace.0
00:26:16.358 Removing: /dev/shm/spdk_tgt_trace.pid60774
00:26:16.358 Removing: /var/run/dpdk/spdk0
00:26:16.358 Removing: /var/run/dpdk/spdk1
00:26:16.358 Removing: /var/run/dpdk/spdk2
00:26:16.358 Removing: /var/run/dpdk/spdk3
00:26:16.358 Removing: /var/run/dpdk/spdk4
00:26:16.358 Removing: /var/run/dpdk/spdk_pid60629
00:26:16.358 Removing: /var/run/dpdk/spdk_pid60774
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61035
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61122
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61148
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61252
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61282
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61400
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61686
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61863
00:26:16.358 Removing: /var/run/dpdk/spdk_pid61940
00:26:16.358 Removing: /var/run/dpdk/spdk_pid62027
00:26:16.358 Removing: /var/run/dpdk/spdk_pid62117
00:26:16.358 Removing: /var/run/dpdk/spdk_pid62155
00:26:16.358 Removing: /var/run/dpdk/spdk_pid62191
00:26:16.358 Removing: /var/run/dpdk/spdk_pid62247
00:26:16.358 Removing: /var/run/dpdk/spdk_pid62348
00:26:16.358 Removing: /var/run/dpdk/spdk_pid62970
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63029
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63085
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63113
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63187
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63215
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63288
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63316
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63368
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63398
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63444
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63474
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63626
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63656
00:26:16.358 Removing: /var/run/dpdk/spdk_pid63730
00:26:16.358 Removing: /var/run/dpdk/spdk_pid64147
00:26:16.358 Removing: /var/run/dpdk/spdk_pid64467
00:26:16.358 Removing: /var/run/dpdk/spdk_pid66945
00:26:16.358 Removing: /var/run/dpdk/spdk_pid66996
00:26:16.359 Removing: /var/run/dpdk/spdk_pid67280
00:26:16.359 Removing: /var/run/dpdk/spdk_pid67326
00:26:16.359 Removing: /var/run/dpdk/spdk_pid67690
00:26:16.617 Removing: /var/run/dpdk/spdk_pid68229
00:26:16.617 Removing: /var/run/dpdk/spdk_pid68673
00:26:16.617 Removing: /var/run/dpdk/spdk_pid69598
00:26:16.617 Removing: /var/run/dpdk/spdk_pid70553
00:26:16.617 Removing: /var/run/dpdk/spdk_pid70675
00:26:16.617 Removing: /var/run/dpdk/spdk_pid70737
00:26:16.617 Removing: /var/run/dpdk/spdk_pid72172
00:26:16.617 Removing: /var/run/dpdk/spdk_pid72453
00:26:16.617 Removing: /var/run/dpdk/spdk_pid75725
00:26:16.617 Removing: /var/run/dpdk/spdk_pid76101
00:26:16.617 Removing: /var/run/dpdk/spdk_pid76694
00:26:16.617 Removing: /var/run/dpdk/spdk_pid77103
00:26:16.617 Removing: /var/run/dpdk/spdk_pid82521
00:26:16.617 Removing: /var/run/dpdk/spdk_pid82963
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83070
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83204
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83236
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83276
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83322
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83472
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83606
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83856
00:26:16.617 Removing: /var/run/dpdk/spdk_pid83961
00:26:16.617 Removing: /var/run/dpdk/spdk_pid84209
00:26:16.617 Removing: /var/run/dpdk/spdk_pid84315
00:26:16.617 Removing: /var/run/dpdk/spdk_pid84450
00:26:16.617 Removing: /var/run/dpdk/spdk_pid84797
00:26:16.617 Removing: /var/run/dpdk/spdk_pid85234
00:26:16.617 Removing: /var/run/dpdk/spdk_pid85483
00:26:16.617 Removing: /var/run/dpdk/spdk_pid85962
00:26:16.617 Removing: /var/run/dpdk/spdk_pid85970
00:26:16.617 Removing: /var/run/dpdk/spdk_pid86312
00:26:16.617 Removing: /var/run/dpdk/spdk_pid86332
00:26:16.617 Removing: /var/run/dpdk/spdk_pid86346
00:26:16.617 Removing: /var/run/dpdk/spdk_pid86378
00:26:16.617 Removing: /var/run/dpdk/spdk_pid86387
00:26:16.617 Removing: /var/run/dpdk/spdk_pid86751
00:26:16.617 Removing: /var/run/dpdk/spdk_pid86794
00:26:16.617 Removing: /var/run/dpdk/spdk_pid87116
00:26:16.617 Removing: /var/run/dpdk/spdk_pid87348
00:26:16.617 Removing: /var/run/dpdk/spdk_pid87852
00:26:16.617 Removing: /var/run/dpdk/spdk_pid88435
00:26:16.617 Removing: /var/run/dpdk/spdk_pid89791
00:26:16.617 Removing: /var/run/dpdk/spdk_pid90393
00:26:16.617 Removing: /var/run/dpdk/spdk_pid90395
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92341
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92431
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92522
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92592
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92738
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92823
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92919
00:26:16.617 Removing: /var/run/dpdk/spdk_pid92990
00:26:16.617 Removing: /var/run/dpdk/spdk_pid93335
00:26:16.617 Removing: /var/run/dpdk/spdk_pid93994
00:26:16.617 Removing: /var/run/dpdk/spdk_pid95324
00:26:16.617 Removing: /var/run/dpdk/spdk_pid95511
00:26:16.617 Removing: /var/run/dpdk/spdk_pid95802
00:26:16.617 Removing: /var/run/dpdk/spdk_pid96099
00:26:16.617 Removing: /var/run/dpdk/spdk_pid96611
00:26:16.617 Removing: /var/run/dpdk/spdk_pid96626
00:26:16.617 Removing: /var/run/dpdk/spdk_pid96982
00:26:16.617 Removing: /var/run/dpdk/spdk_pid97136
00:26:16.617 Removing: /var/run/dpdk/spdk_pid97294
00:26:16.617 Removing: /var/run/dpdk/spdk_pid97391
00:26:16.617 Removing: /var/run/dpdk/spdk_pid97546
00:26:16.617 Removing: /var/run/dpdk/spdk_pid97654
00:26:16.617 Removing: /var/run/dpdk/spdk_pid98303
00:26:16.617 Removing: /var/run/dpdk/spdk_pid98338
00:26:16.617 Removing: /var/run/dpdk/spdk_pid98372
00:26:16.617 Removing: /var/run/dpdk/spdk_pid98627
00:26:16.617 Removing: /var/run/dpdk/spdk_pid98662
00:26:16.617 Removing: /var/run/dpdk/spdk_pid98692
00:26:16.617 Removing: /var/run/dpdk/spdk_pid99117
00:26:16.617 Removing: /var/run/dpdk/spdk_pid99152
00:26:16.617 Removing: /var/run/dpdk/spdk_pid99630
00:26:16.617 Removing: /var/run/dpdk/spdk_pid99779
00:26:16.617 Removing: /var/run/dpdk/spdk_pid99800
00:26:16.617 Clean
00:26:16.876 05:32:20 -- common/autotest_common.sh@1451 -- # return 0
00:26:16.876 05:32:20 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:26:16.876 05:32:20 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:16.876 05:32:20 -- common/autotest_common.sh@10 -- # set +x
00:26:16.876 05:32:20 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:26:16.876 05:32:20 -- common/autotest_common.sh@730 -- # xtrace_disable
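The Cleaning phase above is autotest_cleanup dropping per-process DPDK runtime state (config, fbarray_memseg segments, memzone and hugepage bookkeeping) plus stale shared-memory trace files. A minimal sketch of that removal pattern, assuming the /var/run/dpdk layout shown in the list; this is not the script's literal code:

    # Remove runtime directories left behind by exited SPDK targets.
    for d in /var/run/dpdk/spdk*; do
        echo "Removing: $d"
        rm -rf "$d"
    done
    # Per-pid trace files live in shared memory.
    rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*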
00:26:16.876 05:32:20 -- common/autotest_common.sh@10 -- # set +x
00:26:16.876 05:32:20 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:16.876 05:32:20 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:26:16.876 05:32:20 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:26:16.876 05:32:20 -- spdk/autotest.sh@395 -- # hash lcov
00:26:16.876 05:32:20 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:26:16.876 05:32:20 -- spdk/autotest.sh@397 -- # hostname
00:26:16.876 05:32:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:17.136 geninfo: WARNING: invalid characters removed from testname!
00:26:43.673 05:32:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:46.961 05:32:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:49.490 05:32:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:52.773 05:32:56 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:55.306 05:32:58 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:57.842 05:33:01 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:00.375 05:33:04 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:00.375 05:33:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
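The coverage pass above is the usual lcov capture, merge, filter sequence: capture counters from the instrumented tree, append them to the pre-test baseline, then strip bundled dependencies and system paths so only SPDK sources remain. A condensed sketch of the same sequence, with the long output paths and --rc options shortened for readability:

    # Capture post-test counters (same shape as autotest.sh@397 above; paths illustrative).
    lcov --no-external -q -c -d ./spdk -t "$(hostname)" -o cov_test.info
    # Merge the pre-test baseline with the test capture.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Remove unwanted paths from the totals, one filter pass per pattern.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info
    done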
00:27:00.375 05:33:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:00.375 05:33:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:00.375 05:33:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:00.375 05:33:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:00.375 05:33:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:00.375 05:33:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:00.375 05:33:04 -- paths/export.sh@5 -- $ export PATH
00:27:00.375 05:33:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:00.375 05:33:04 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:00.375 05:33:04 -- common/autobuild_common.sh@447 -- $ date +%s
00:27:00.375 05:33:04 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721885584.XXXXXX
00:27:00.375 05:33:04 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721885584.fH7FGc
00:27:00.375 05:33:04 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:27:00.375 05:33:04 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:27:00.375 05:33:04 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:27:00.375 05:33:04 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:00.375 05:33:04 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:00.375 05:33:04 -- common/autobuild_common.sh@463 -- $ get_config_params
00:27:00.375 05:33:04 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:27:00.375 05:33:04 -- common/autotest_common.sh@10 -- $ set +x
00:27:00.376 05:33:04 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:27:00.376 05:33:04 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:27:00.376 05:33:04 -- pm/common@17 -- $ local monitor
00:27:00.376 05:33:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:00.376 05:33:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:00.376 05:33:04 -- pm/common@25 -- $ sleep 1
00:27:00.376 05:33:04 -- pm/common@21 -- $ date +%s
00:27:00.376 05:33:04 -- pm/common@21 -- $ date +%s
00:27:00.376 05:33:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721885584
00:27:00.376 05:33:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721885584
00:27:00.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721885584_collect-vmstat.pm.log
00:27:00.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721885584_collect-cpu-load.pm.log
00:27:01.754 05:33:05 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:27:01.754 05:33:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:27:01.754 05:33:05 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:27:01.754 05:33:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:01.754 05:33:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:01.754 05:33:05 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:01.754 05:33:05 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:01.754 05:33:05 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:01.754 05:33:05 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:01.754 05:33:05 -- spdk/autopackage.sh@20 -- $ exit 0
00:27:01.754 05:33:05 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:27:01.754 05:33:05 -- pm/common@29 -- $ signal_monitor_resources TERM
00:27:01.754 05:33:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:27:01.754 05:33:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:01.754 05:33:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:27:01.754 05:33:05 -- pm/common@44 -- $ pid=101468
00:27:01.754 05:33:05 -- pm/common@50 -- $ kill -TERM 101468
00:27:01.754 05:33:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:01.754 05:33:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:27:01.754 05:33:05 -- pm/common@44 -- $ pid=101470
00:27:01.754 05:33:05 -- pm/common@50 -- $ kill -TERM 101470
00:27:01.754 + [[ -n 5147 ]]
00:27:01.754 + sudo kill 5147
00:27:01.764 [Pipeline] }
00:27:01.785 [Pipeline] // timeout
00:27:01.791 [Pipeline] }
00:27:01.809 [Pipeline] // stage
00:27:01.816 [Pipeline] }
00:27:01.833 [Pipeline] // catchError
00:27:01.843 [Pipeline] stage
00:27:01.846 [Pipeline] { (Stop VM)
00:27:01.860 [Pipeline] sh
00:27:02.140 + vagrant halt
00:27:06.332 ==> default: Halting domain...
00:27:11.647 [Pipeline] sh
00:27:11.927 + vagrant destroy -f
00:27:16.113 ==> default: Removing domain...
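stop_monitor_resources above finds each resource collector through the pid file it wrote at startup and signals it to exit before the VM is torn down. A bare-bones sketch of that pid-file pattern; the output directory below is illustrative:

    # Signal every running collector recorded under the power/ output directory.
    for pidfile in ./output/power/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        pid=$(<"$pidfile")    # each collector wrote its own pid here
        kill -TERM "$pid"     # it flushes its .pm.log and exits
    done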
00:27:16.125 [Pipeline] sh
00:27:16.403 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:27:16.415 [Pipeline] }
00:27:16.433 [Pipeline] // stage
00:27:16.440 [Pipeline] }
00:27:16.458 [Pipeline] // dir
00:27:16.464 [Pipeline] }
00:27:16.482 [Pipeline] // wrap
00:27:16.490 [Pipeline] }
00:27:16.507 [Pipeline] // catchError
00:27:16.518 [Pipeline] stage
00:27:16.521 [Pipeline] { (Epilogue)
00:27:16.536 [Pipeline] sh
00:27:16.820 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:23.401 [Pipeline] catchError
00:27:23.403 [Pipeline] {
00:27:23.418 [Pipeline] sh
00:27:23.699 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:23.957 Artifacts sizes are good
00:27:23.966 [Pipeline] }
00:27:23.984 [Pipeline] // catchError
00:27:23.997 [Pipeline] archiveArtifacts
00:27:24.004 Archiving artifacts
00:27:24.181 [Pipeline] cleanWs
00:27:24.217 [WS-CLEANUP] Deleting project workspace...
00:27:24.217 [WS-CLEANUP] Deferred wipeout is used...
00:27:24.246 [WS-CLEANUP] done
00:27:24.248 [Pipeline] }
00:27:24.266 [Pipeline] // stage
00:27:24.272 [Pipeline] }
00:27:24.290 [Pipeline] // node
00:27:24.296 [Pipeline] End of Pipeline
00:27:24.333 Finished: SUCCESS